[HN Gopher] Guess I'm a Rationalist Now
       ___________________________________________________________________
        
       Guess I'm a Rationalist Now
        
       Author : nsoonhui
       Score  : 196 points
       Date   : 2025-06-19 10:22 UTC (12 hours ago)
        
 (HTM) web link (scottaaronson.blog)
 (TXT) w3m dump (scottaaronson.blog)
        
       | cue_the_strings wrote:
       | I feel like I'm witnessing something that Adam Curtis would cover
        | in the last part of The Century of the Self, in real time.
        
         | greener_grass wrote:
          | There was always an underlying Randian impulse to the EA crowd
          | - as if we could solve any issue if we just get _the right
          | minds_ tackling the problem. The black-and-white thinking,
          | groupthink, hero worship and caricaturist literature are all
          | there.
        
           | cue_the_strings wrote:
            | I've always wondered whether it's her direct influence, or
            | whether those characteristics just naturally "go together".
        
       | roenxi wrote:
        | The irony here is that the Rationalist community is made up of
        | the ones who weren't observant enough to notice that "identifying
        | as a Rationalist" is generally not a rational decision.
        
         | MichaelZuo wrote:
         | From what I've seen it's a mix of that, some who avoid the
         | issue, and some who do it intentionally even though they don't
         | really believe it.
        
       | voidhorse wrote:
       | These kinds of propositions are determined by history, not by
       | declaration.
       | 
       | Espouse your beliefs, participate in certain circles if you want,
       | but avoid labels _unless_ you intend to do ideological battle
       | with other label-bearers.
        
         | Sharlin wrote:
         | Bleh, labels can be restrictive, but guess what labels can also
         | be? _Useful_.
        
         | resource_waste wrote:
         | >These kinds of propositions are determined by history, not by
         | declaration.
         | 
         | A single failed prediction should revoke the label.
         | 
          | The ideal rational person should be a Pyrrhonian skeptic, or at
          | a minimum a Bayesian epistemologist.
        
       | MeteorMarc wrote:
        | This is what rationalism entails:
       | https://plato.stanford.edu/entries/rationalism-empiricism/
        
         | greener_grass wrote:
         | For any speed-runners out there:
         | https://en.wikipedia.org/wiki/Two_Dogmas_of_Empiricism
        
         | Sharlin wrote:
         | That's a different definition of rationalism from what is used
         | here.
        
           | AnimalMuppet wrote:
           | It is. But the Rationalists, by taking that name as a label,
           | are claiming that they are what the GP said. They want the
           | prestige/respect/audience that the word gets, without
           | actually being that.
        
             | FeepingCreature wrote:
              | (The rationalists never took that label; it is falsely
             | applied to them. The project is called rationality, not
             | rationalism. Unfortunately, this is now so pervasive that
             | there's no fixing it.)
        
               | AnimalMuppet wrote:
               | Hmm, interesting. Might I trouble you for your
               | definitions of rationality and rationalism?
               | 
               | (Not a "gotcha". I really want to know.)
        
               | FeepingCreature wrote:
               | Sure! Rationality is what Eliezer called his project
               | about teaching people to reason better (more empirically,
               | more probabilistically) in the events I described over
               | here: https://news.ycombinator.com/item?id=44320919 .
               | 
               | I don't know rationalism too well but I think it was a
               | historical philosophical movement asserting you could
               | derive knowledge by reasoning from axioms rather than
               | observation.
               | 
               | The primary difference here is that rationality mostly
               | teaches "use your reason to guide what to observe and how
               | to react to observations" rather than doing away with
               | observations altogether; it's basically an action loop
               | alternating between observation and belief propagation.
               | 
               | A prototypical/mathematical example of a pure LessWrong-
               | type "rational" reasoner is Hutter's AIXI (a definition
               | of the "optimal" next step given an input tape and a
               | goal), though it has certain known problems of self-
               | referentiality. Though of course reasoning in this way
               | does not work for humans; a large part of the Sequences
               | is attempts to port mathematically correct reasoning to
               | human cognition.
               | 
               | You can kind of read it as a continuation of early-2000s
               | internet atheism: instead of defining correct reasoning
               | by enumerating incorrect logic, ie. "fallacies", it
               | attempts to construct it positively, by describing what
               | to do rather than just what not to do.
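                | 
                | To make the "alternating between observation and belief
                | propagation" loop concrete, here's a minimal sketch (a
                | toy coin-bias example of my own, not anything from the
                | Sequences or AIXI itself):
                | 
                |     # Toy observe/update loop: estimate a coin's bias
                |     # over a grid of hypotheses via Bayes' rule.
                |     import random
                | 
                |     hypos = [i / 10 for i in range(11)]
                |     beliefs = [1 / len(hypos)] * len(hypos)  # uniform prior
                | 
                |     for _ in range(100):
                |         heads = random.random() < 0.7  # hidden true bias
                |         # Belief propagation: reweight each hypothesis by
                |         # the likelihood of the observation, renormalize.
                |         beliefs = [b * (h if heads else 1 - h)
                |                    for b, h in zip(beliefs, hypos)]
                |         total = sum(beliefs)
                |         beliefs = [b / total for b in beliefs]
                | 
                |     print(max(zip(beliefs, hypos)))  # mass ends up near 0.7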
        
       | amarcheschi wrote:
        | They call themselves rationalists, yet they don't have very
       | rational opinions if you ask them about scientific racism [1]
       | 
       | [1] https://www.astralcodexten.com/p/how-to-stop-worrying-and-
       | le...
        
         | wffurr wrote:
          | I am not sure precisely what is not very rational about that
          | link. Did you have a specific point you were trying to make
          | with it?
        
           | amarcheschi wrote:
           | Yes, that they're not "rational".
           | 
           | If you take a look at the biodiversity survey here
           | https://reflectivealtruism.com/2024/12/27/human-
           | biodiversity...
           | 
            | About 1/3 of the users at ACX actually support flawed
            | scientific theories that purport to put racial IQ differences
            | on a scientific basis. The Lynn study on IQ is also quite
            | flawed:
           | https://en.m.wikipedia.org/wiki/IQ_and_the_Wealth_of_Nations
           | 
           | If you want to read about human biodiversity,
           | https://en.m.wikipedia.org/wiki/Human_Biodiversity_Institute
           | 
            | As I said, it's not very rational of them to support such
            | theories. And of course, as you scratch the surface, you find
            | the old 20th-century racist theories, supported by people
            | (mostly white men, if I had to guess) claiming to be rational.
        
             | derangedHorse wrote:
             | Nothing about the article you posted in your first comment
             | seems racist. You could argue that believing in the
             | conclusions of Richard Lynn's work makes someone racist,
             | but to support that claim, you'd need to show that those
             | who believe it do so out of willful ignorance of evidence
             | that his science is flawed.
        
               | amarcheschi wrote:
               | Scott itself makes a point of the study being debated.
               | It's not. It's not debated. It's pseudo science,or
               | "science" made with so many questionable points that it's
               | hard to call it "science". He links to a magazine article
               | written by a researcher that has been fired, not
               | surprisingly, for his pseudo scientific stances on racism
               | https://en.m.wikipedia.org/wiki/Noah_Carl
               | 
               | Saying in 2025 that the study is still debated is not
               | only racist, but dishonest as well. It's not debated,
               | it's junk
        
                | wizzwizz4 wrote:
                | It _is_ debated: just not by serious scholars or
                | academics. (Which doesn't necessarily _make_ it wrong;
                | but "scientific racism is bunk, and its proponents are
                | persuasive" is a model whose high predictive power has
                | served me well, so I believe it's wrong regardless.)
        
              | mjburgess wrote:
              | A lot of "rationalists" of this kind are very poorly
              | informed about statistical methodology, a condition they
              | inherit from reading papers written in these
              | pseudoscientific fields by people likewise very poorly
              | informed.
             | 
              | This is a pathology that has not really been addressed at
              | large anywhere, really. Very few people in the applied
              | sciences who understand statistical methodology "leave
              | their areas" -- and many areas that require it would
              | disappear if it entered.
        
                | amarcheschi wrote:
                | I agree; I had to read things for an IT ethics course at
                | uni that read more like science fiction than actual
                | science. Anyway, my point is that it feels pretentious -
                | very pretentious, and I'm being kind with words - to
                | support such pseudoscientific theories and call oneself a
                | rationalist. Especially when these theories can be
                | debunked just by reading the related Wikipedia page.
        
               | saalweachter wrote:
               | More charitably, it is really, really hard to tell the
               | difference between a crank kicked out of a field for
               | being a crank, and an earnest researcher being persecuted
                | for not toeing the political line, without being an
               | expert in the field in question and familiar with the
               | power structures involved.
               | 
               | A lot of people who like to think of themselves as
               | skeptical could also be categorized as contrarian -- they
               | are skeptical of institutions, and if someone is outside
               | an institution, that automatically gives them a certain
               | credibility.
               | 
               | There are three or four logical fallacies in the mix, and
               | if you throw in confirmation bias because what the one
               | side says appeals to your own prior beliefs, it is
               | really, really easy to convince yourself that you're the
               | steely-eyed rationalist perceiving the world correctly
               | while everyone else is deluded by their biases.
        
             | exoverito wrote:
              | Human ethnic groups are measurably different in genetic
              | terms, as measured by single-nucleotide polymorphisms and
              | allele frequencies. There are multiple PCA plots of the
              | 1000 Genomes dataset which show clear cluster separation
              | based on ancestry:
             | 
             | https://www.researchgate.net/figure/Example-Ancestry-PCA-
             | plo...
             | 
             | We know ethnic groups vary in terms of height, hair color,
             | eye color, melanin, bone density, sprinting ability,
             | lactose tolerance, propensity to diseases like sickle cell
             | anemia, Tay-Sachs, stomach cancer, alcoholism risk, etc.
             | Certain medications need to be dosed differently for
             | different ethnic groups due to the frequency of certain
             | gene variants, e.g. Carbamazepine, Warfarin, Allopurinol.
             | 
             | The fixation index (Fst) quantifies the level of genetic
             | variation between groups, a value of 0 means no
             | differentiation, and 1 is maximal. A 2012 study based on
              | SNPs found that Finns and Swedes have an Fst value of
             | 0.0050-0.0110, Chinese and Europeans at 0.110, and Japanese
             | and Yoruba at 0.190.
             | 
             | https://pmc.ncbi.nlm.nih.gov/articles/PMC2675054/
             | 
             | A 1994 study based on 120 alleles found the two most
             | distant groups were Mbuti pygmies and Papua New Guineans at
              | an Fst of 0.4573.
             | 
             | https://en.wikipedia.org/wiki/File:Full_Fst_Average.png
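              | 
              | (To make concrete what Fst measures: a minimal single-locus
              | sketch using Wright's formula Fst = (Ht - Hs) / Ht. The
              | allele frequencies below are made up, and genome-wide
              | figures like the ones above average over many loci.)
              | 
              |     # Minimal sketch of Wright's fixation index for one
              |     # biallelic locus across two equal-size populations.
              |     def fst(p1, p2):
              |         p_bar = (p1 + p2) / 2
              |         h_t = 2 * p_bar * (1 - p_bar)  # total heterozygosity
              |         h_s = (2 * p1 * (1 - p1)
              |                + 2 * p2 * (1 - p2)) / 2  # within-group mean
              |         return (h_t - h_s) / h_t
              | 
              |     print(fst(0.50, 0.50))  # 0.0: no differentiation
              |     print(fst(0.45, 0.55))  # 0.01: Finn/Swede scale
              |     print(fst(0.05, 0.95))  # 0.81: approaching fixation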
             | 
              | In genome-wide association studies, polygenic scores have
              | been developed to find thousands of gene variants linked to
              | phenotypes like spatial and verbal intelligence, memory,
              | and processing speed. The distribution of these gene
              | variants is not uniform across ethnic groups.
             | 
             | Given that we know there are genetic differences between
             | groups, and observable variation, it stands to reason that
             | there could be a genetic component for variation in
              | intelligence between groups. It would be dogmatic to claim
              | a priori that there is absolutely no genetic component, and
              | such a claim is pretty obviously motivated by the fear that
              | inequality is much more intractable than commonly believed.
        
               | mock-possum wrote:
               | ...but that just sounds like scientific racism.
               | 
               | Rather than judging an individual on their actual
               | intelligence, these kinds of statistical trends allow you
               | to justify judging an individual based on their race,
               | because you feel you can credibly claim that race is an
               | acceptable proxy for their genome, is an acceptable proxy
               | for their intelligence.
               | 
               | Or for their trustworthiness, or creativity, or
               | sexuality, or dutifulness, or compassion, or
               | aggressiveness, or alacrity, or humility, etc etc.
               | 
               | When you treat a person like a people, that's still
               | prejudice.
        
               | FeepingCreature wrote:
               | Well, the rational thing is obviously to be scared of
               | what ideas sound like.
               | 
               | > Rather than judging an individual on their actual
               | intelligence
               | 
               | Actual intelligence is hard to know! However, lots of
               | factors allow you to make a rapid initial estimate of
               | their actual intelligence, which you can then refine as
               | required.
               | 
               | (When the factors include apparent genetic heritage, this
               | is called "racism" and society doesn't like it. But that
                | doesn't mean it doesn't _work_, just that you can get
               | fired and banned for doing it.)
               | 
               | ((This is of course why we must allow IQ tests for
               | hiring; then there's no _need_ to pay attention to skin
               | color, so liberals should be all for it.))
        
               | const_cast wrote:
               | > Well, the rational thing is obviously to be scared of
               | what ideas sound like.
               | 
               | Yes, actually. If an idea sounds like it can be used to
               | commit crimes against humanity, you should pause. You
               | should reassess said idea multiple times. You should be
               | skeptical. You shouldn't ignore that feeling.
               | 
               | What a lot of people are missing is _intent_ - the human
               | element. Why were these studies conducted? Who conducted
               | them?
               | 
                | If someone insane conducts a study then yes - that is
                | absolutely grounds to be skeptical of said study. It's
                | perfectly rational. If extremely racist people produce
                | studies which just so happen to be racist, we should take
                | a step back and go "hmm".
                | 
                | Being right or being correct is one thing, but it's not
                | absolutely valuable. The end result and how "bad" it is
                | also matter, and oftentimes they matter more. And,
                | elephant in the room, nobody actually knows if they're
                | right. Drawing logical conclusions doesn't make you
                | right, because you are forced to make thousands of
                | assumptions along the way.
               | 
               | You might be right, you might not be. Let's all have some
               | humility.
        
               | stonogo wrote:
               | The assertion that "actual intelligence is hard to know"
               | followed almost immediately by "apparent genetic
               | heritage" is what's wrong with your opinion. And no, it
               | doesn't work -- at least it doesn't work for identifying
               | intelligence. It just works for selecting people who
               | appeal to your bias.
               | 
               | IQ tests are not actual measurements of anything; this is
               | both because nobody has a rigorous working definition of
               | intelligence and because nobody's figured out a universal
               | method of measuring achievement of what insufficient
               | definitions we have. Their proponents are more interested
               | in pigeonholing people than actually measuring anything
               | anyway.
               | 
               | And as a hiring manager, I'd hire an idiot who is good at
               | the job over a genius who isn't.
        
               | wizzwizz4 wrote:
               | Your treatment of IQ is ridiculous. Give me access to a
               | child for seven months, and I can increase their IQ score
               | by 20 (and probably make myself an enemy in the process:
                | IQ test drills are one of the _dullest_ activities, since
                | you can't even switch your brain off while doing them).
                | 
                | Intelligence is not a single-axis thing. IQ test results
                | are _significantly_ influenced by socioeconomic factors.
                | "Actual intelligence is hard to know" because it _doesn't
                | exist_.
               | 
               | I have never yet known scientific racism to produce true
               | results. I _have_ known a lot of people to say the sorts
                | of things you're saying: evidence-free claims that
               | racism is fine so long as you're doing the Good Racism
               | that Actually Works(tm), I Promise, This Time It's Not
               | Prejudice Because It's Justified(r).
               | 
               | No candidate genetic correlate of the g factor has ever
               | replicated. That should be a _massive flashing warning
               | sign_ that - rather than having identified an elusive
               | fact about reality that just so happens to not appear in
                | any rigorous study - maybe you're falling afoul of the
               | same in-group/out-group bias as nearly every group of
               | humans since records begin.
               | 
               | Since I have no reason to believe your heuristic is
               | accurate, we _can_ stop there. However, to further
                | underline that you're _not_ thinking rationally: even if
                | blue people were (on average) 2x as capable at spatial
                | rotation-based office jobs as green people, it _still_
                | wouldn't be a good idea to start with the skin colour
               | prior and update from there, because that would lead to
               | the creation of caste systems, which hinder social
               | mobility. Even if scientific racism worked (which it
               | hasn't to date!), the rational approach would _still_ be
               | to judge people on their own merits.
               | 
               | If you find it hard to assess the competence of your
               | subordinates, to the point where you're resorting to
               | population-level stereotypes to make hiring decisions,
               | you're an incompetent manager and should find another
               | job.
        
               | gadders wrote:
               | As we all know, genetics and evolution only apply from
               | the neck down.
        
               | dennis_jeeves2 wrote:
                | True, nature is egalitarian, although only intracranially.
        
           | pixodaros wrote:
            | In that essay Scott Alexander more or less says "so Richard
            | Lynn made up numbers about how stupid black and brown people
            | are, but we'd all know he was right if those mean scientists
            | just let us collect the data to prove it." That's the level
            | of thinking most of us moved past in high school, and he is
            | an MD who sees himself as a Public Intellectual! More
            | evidence that thinking too much about IQ makes people stupid.
        
       | contrarian1234 wrote:
        | The article made me think more deeply about what rubs me the
        | wrong way about the whole movement.
        | 
        | I think there is some inherent tension between being "rational"
        | about things and trying to reason about things from first
        | principles... and the general absolutist tone of the community.
        | The people involved all seem very... full of themselves? They
        | don't really ever show a sense of "hey, I've got a thought,
        | maybe I haven't considered all angles to it, maybe I'm wrong -
        | but here it is". The type of people that would be embarrassed
        | not to have an opinion on a topic or to say "I don't know".
        | 
        | In the pre-AI days this was sort of tolerable, but since then...
        | the frothing-at-the-mouth conviction of the end of the world...
        | just shows a real lack of humility and a lack of acknowledgment
        | that maybe we don't have a full grasp of the implications of AI.
        | Maybe it's actually going to be rather benign and more boring
        | than expected.
        
         | Avicebron wrote:
         | Yeah the "rational" part always seemed a smokescreen for the
         | ability to produce and ingest their own and their associates
         | methane gases.
         | 
         | I get it, I enjoyed being told I'm a super genius always right
         | quantum physicist mathematician by the girls at Stanford too.
         | But holy hell man, have some class, maybe consider there's more
         | good to be done in rural Indiana getting some dirt under those
         | nails..
        
           | Cthulhu_ wrote:
           | It feels like a shield of sorts, "I am a rationalist
           | therefore my opinion has no emotional load, it's just facts
           | bro how dare you get upset at me telling xyz is such-and-such
           | you are being irrational do your own research"
           | 
           | but I don't know enough about it, I'm just trolling.
        
           | shermantanktop wrote:
           | The meta with these people is "my brilliance comes with an
           | ego that others must cater to."
           | 
           | I find it sadly hilarious to watch academic types fight over
           | meaningless scraps of recognition like toddlers wrestling for
           | a toy.
           | 
           | That said, I enjoy some of the rationalist blog content and
           | find it thoughtful, up to the point where they bravely allow
           | their chain of reasoning to justify antisocial ideas.
        
           | dkarl wrote:
           | It's a conflict as old as time. What do you do when an
           | argument leads to an unexpected conclusion? I think there are
           | two good responses: "There's something going on here, so
           | let's dig into it," or, "There's something going on here, but
           | I'm not going to make time to dig into it." Both equally
           | valid.
           | 
           | In real life, the conversation too often ends up being, "This
           | has to be wrong, and you're an obnoxious nerd for bothering
           | me with it," versus, "You don't understand my argument, so I
           | am smarter, and my conclusions are brilliantly subversive."
        
             | bilbo0s wrote:
              | Might kind of point to real-life people having too much of
              | what is now called _"rationality"_, and very little of
              | what used to be called _"wisdom"_?
        
               | yamazakiwi wrote:
               | Wisdom tends to resemble shallow aphorisms despite being
               | framed as universal. Rather than interrogating wisdom's
               | relevance or depth, many people simply repeat it
               | uncritically as a shortcut to insight. This reflects more
               | about how people use wisdom than the content itself, but
               | I believe that behavior contributes to our perception of
               | the importance of wisdom.
               | 
               | It frequently reduces complex problems into comfortable
               | oversimplifications.
               | 
               | Maybe you don't think that is real wisdom, and maybe
               | that's sort of your point, but then what does real wisdom
               | look like? Should wisdom make you considerate of the
               | multiple contexts it does and doesn't affect? Maybe the
               | issue is we need to better understand how to evaluate and
               | use wisdom. People who truly understand a piece of wisdom
               | should communicate deeply rather than parroting
               | platitudes.
               | 
               | Also to be frank, wisdom is a way of controlling how
               | others perceive a problem, and is a great way to
               | manipulate others by propping up ultimatums or forcing
               | scope. Much of past wisdom is unhelpful or highly
               | irrelevant to modern life.
               | 
               | e.g. "Good things come to those who wait."
               | 
               | Passive waiting rarely produces results. Initiative,
               | timing, and strategic action tend to matter more than
               | patience.
        
         | felipeerias wrote:
         | The problem with trying to reason everything from first
          | principles is that most things didn't actually come about that
          | way.
         | 
         | Both our biology and other complex human affairs like societies
         | and cultures evolved organically over long periods of time,
         | responding to their environments and their competitors,
         | building bit by bit, sometimes with an explicit goal but often
         | without one.
         | 
          | One can learn a lot from unicellular organisms, but probably
          | won't be able to reason from them all the way to an elephant.
          | At best, if we are lucky, we can reason back from the elephant.
        
           | loose-cannon wrote:
           | Reducibility is usually a goal of intellectual pursuits? I
           | don't see that as a fault.
        
             | colordrops wrote:
              | What the person you are replying to is saying is that some
              | things are not reducible, i.e. the vast array of complexity
              | and detail is all relevant.
        
               | loose-cannon wrote:
               | That's a really hard belief to justify. And what
               | implications would that position have? Should biologists
               | give up?
        
               | the_af wrote:
               | Biologists don't try to reason everything from first
               | principles.
               | 
               | Actually, neither do Rationalists, but instead they
               | cosplay at being rational.
        
               | falcor84 wrote:
               | > Biologists don't try to reason everything from first
               | principles.
               | 
               | What do you mean? The biologists I've had the privilege
               | of working with absolutely do try to. Obviously some work
               | at a higher level of abstraction than others, but I've
               | not met any who apply any magical thinking to the actual
               | biological investigation. In particular (at least in my
               | milieu), I have found that the typical biologist is more
               | likely to consider quantum effects than the typical
               | physicist. On the other hand (again, from my limited
               | experience), biologists do tend to have some magical
               | thinking about how statistics (and particularly
               | hypothesis testing) works, but no one is perfect.
        
               | svnt wrote:
               | Setting up reasoning from first principles vs magical
               | thinking is a false dichotomy and an implicit swipe.
        
               | falcor84 wrote:
               | Ok, mea culpa. So what distinction did you have in mind?
        
               | Veen wrote:
               | It would imply that when dealing with complex systems,
               | models and conceptual frameworks are, at the very best,
               | useful approximations. It would also imply that it is
               | foolhardy to ignore phenomena simply because they are not
               | comprehensible within your preferred framework. It does
               | not imply biologists should give up.
        
               | pixl97 wrote:
               | How reducible is the question. If some particular events
                | require a minimum amount of complexity, how do you
               | reduce it below that?
        
               | __MatrixMan__ wrote:
                | I think that chemistry, physics, and mathematics are
               | engaged in a program of understanding their subject in
               | terms of the sort of first principles that Descartes was
               | after. Reduction of the subject to a set of simpler
               | thoughts that are outside of it.
               | 
               | Biologists stand out because they have already given up
               | on that idea. They may still seek to simplify complex
               | things by refining principles of some kind, but it's a
               | "whatever stories work best" approach. More Feyerabend,
               | less Popper. Instead of axioms they have these patterns
               | that one notices after failing to find axioms for a
               | while.
        
               | lukas099 wrote:
               | On the other hand, bio is the branch of science with a
               | single accepted "theory of everything": evolution.
        
               | achierius wrote:
               | Concretely we _know_ that there exist irreducible
               | structures, at least in mathematics: https://en.wikipedia
               | .org/wiki/Classification_of_finite_simpl...
               | 
                | The largest of the sporadic finite simple groups (simple
                | groups are studied as a means of classifying other finite
               | but non-simple groups, which can always be broken down
               | into simple groups) is the Monster Group -- it has order
               | 808017424794512875886459904961710757005754368000000000,
               | and cannot be reduced to simpler "factors". It has a
               | whole bunch of very interesting properties which thus can
               | only be understood by analyzing the whole object in
               | itself.
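                | 
                | (Aside: "simple" here means no nontrivial normal
                | subgroups, not that the order is prime - the order
                | itself factors cleanly, which is a standard fact and
                | easy to check:)
                | 
                |     # Verify the quoted order of the Monster against
                |     # its standard prime factorization.
                |     factors = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
                |                17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1,
                |                47: 1, 59: 1, 71: 1}
                |     order = 1
                |     for p, e in factors.items():
                |         order *= p ** e
                |     assert order == int(
                |         "8080174247945128758864599049617107"
                |         "57005754368000000000")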
               | 
               | Now whether this applies to biology, I doubt, but it's
               | good to know that limits do exist, even if we don't know
               | exactly where they'll show up in practice.
        
               | whatshisface wrote:
               | That's not really true, otherwise every paper about it
               | would be that many words long. The monster group can be
               | "reduced" into its definition and its properties which
               | can only be considered a few at a time. A person has a
               | working memory of three to seven items.
        
             | jltsiren wrote:
             | "Reductionist" is usually used as an insult. Many people
             | engaged in intellectual pursuits believe that reductionism
             | is not a useful approach to studying various topics. You
             | may argue otherwise, but then you are on a slippery slope
             | towards politics and culture wars.
        
               | js8 wrote:
               | I would not be so sure. There are many fields where
               | reductionism was applied in practice and it yielded
               | useful results, thanks to computers.
               | 
               | Examples that come to mind: statistical modelling
               | (reduction to nonparametric models), protein folding
               | (reduction to quantum chemistry), climate/weather
               | prediction (reduction to fluid physics), human language
               | translation (reduction to neural networks).
               | 
               | Reductionism is not that useful as a theory building
               | tool, but reductionist approaches have a lot of practical
               | value.
        
               | gilleain wrote:
               | > protein folding (reduction to quantum chemistry),
               | 
               | I am not sure in what sense folding simulations are
                | reducible to quantum chemistry. There are interesting
               | 'hybrid' approaches where some (limited) quantum
               | calculations are done for a small part of the structure -
               | usually the active site I suppose - and the rest is done
               | using more standard molecular mechanics/molecular
               | dynamics approaches.
               | 
               | Perhaps things have progressed a lot since I worked in
               | protein bioinformatics. As far as I know, even extremely
               | short simulations at the quantum level were not possible
               | for systems with more than a few atoms.
        
               | jltsiren wrote:
               | I meant that the word "reductionist" is usually an
               | accusation of ignorance. It's not something people doing
               | reductionist work actually use.
        
               | nyeah wrote:
               | But that common use of the word is ignorant nonsense. So,
               | yes, someone is wrong on the internet. So what?
        
               | jltsiren wrote:
               | The context here was a claim that reducibility is usually
               | a goal of intellectual pursuits. Which is empirically
               | false, as there are many academic fields with a negative
               | view of reductionism.
        
               | nyeah wrote:
               | 'Reductionist' can be an insult. It can also be an
               | uncontroversial observation, a useful approach, or a
               | legitimate objection to that approach.
               | 
               | If you're looking for insults, and declaring the whole
               | conversation a "culture war" as soon as you think you
               | found one, (a) you'll avoid plenty of assholes, but (b)
               | in the end you will read whatever you want to read, not
               | what the thoughtful people are actually writing.
        
             | nyrikki wrote:
              | 'Reducibility' is a property, _if present_, that makes
              | problems tractable or even practical.
             | 
             | What you are mentioning is called western reductionism by
             | some.
             | 
             | In the western world it does map to Plato etc, but it is
             | also a problem if you believe everything is reducible.
             | 
             | Under the assumption that all models are wrong, but some
             | are useful, it helps you find useful models.
             | 
             | If you consider Laplacian determinism as a proxy for
             | reductionism, Cantor diagonalization and the standard model
             | of QM are counterexamples.
             | 
              | Russell's paradox is another lens into the limits of Plato,
              | on which the PEM (principle of the excluded middle)
              | assumption is based.
             | 
             | Those common a priori assumptions have value, but are
             | assumptions which may not hold for any particular problem.
        
             | nyeah wrote:
             | Ok. A lot of things are very 'reducible' but information is
             | lost. You can't extend back from the reduction to the
             | original domain.
             | 
             | Reduce a computer's behavior to its hardware design, state
             | of RAM, and physical laws. All those voltages make no sense
             | until you come up with the idea of stored instructions,
             | division of the bits into some kind of memory space, etc.
             | You may say, you can predict the future of the RAM. And
             | that's true. But if you can't read the messages the
             | computer prints out, then you're still doing circuits, not
             | software.
             | 
             | Is that reductionist approach providing valuable insight?
             | YES! Is it the whole picture? No.
             | 
              | This warning isn't new, and it's very mainstream:
              | https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_differen...
        
           | ImaCake wrote:
           | >The problem with trying to reason everything from first
            | principles is that most things didn't actually come about
            | that way.
           | 
           | This is true for science and rationalism itself. Part of the
           | problem is that "being rational" is a social fashion or fad.
           | Science is immensely useful because it produces real results,
           | but we don't _really_ do it for a rational reason - we do it
           | for reasons of cultural and social pressures.
           | 
           | We would get further with rationalism if we remembered or
           | maybe admitted that we do it for reasons that make sense only
           | in a complex social world.
        
             | baxtr wrote:
             | Yes, and if you read Popper that's exactly how he defined
             | rationality / the scientific method: to solve problems of
             | life.
        
             | lsp wrote:
             | A lot of people really need to be reminded of this.
             | 
             | I originally came to this critique via Heidegger, who
             | argues that enlightenment thinking essentially forgets /
             | obscures Being itself, a specific mode of which you
             | experience at this very moment as you read this comment,
             | which is really the basis of everything that we know,
             | including science, technology, and rationality. It seems
             | important to recover and deepen this understanding if we
             | are to have any hope of managing science and technology in
             | a way that is actually beneficial to humans.
        
         | cjs_ac wrote:
         | I think the absolutism is kind of the point.
        
         | ineedaj0b wrote:
         | rationalism got pretty lame the last 2-3 years. imo the peak
         | was trying to convince me to donate a kidney.
         | 
         | post-rationalism is where all the cool kids are and where the
         | best ideas are at right now. the post rationalists consistently
         | have better predictions and the 'rationalists' are stuck
         | arguing whether chickens suffer more getting factory farmed or
         | chickens cause more suffering eating bugs outside.
         | 
         | they also let SF get run into the ground until their detractors
         | decided to take over.
        
           | josephg wrote:
           | Where do the post rats hang out these days? I got involved in
           | the stoa during covid until the online community fragmented.
           | Are there still events & hangouts?
        
             | jes5199 wrote:
             | postrats were never a coherent group but a lot of people
             | who are at https://vibe.camp this weekend probably identify
             | with the label. some of us are still on twitter/X
        
             | Trasmatta wrote:
             | Not "post rat", but r/SneerClub is good for criticisms of
             | rationalists (some from former rationalists)
        
               | ackfoobar wrote:
               | Their sneering is just that. Sneering, not interesting
               | critiques.
        
             | astrange wrote:
             | They're a group called "tpot" on twitter, but it's unclear
             | what's supposed to be good about them.
             | 
              | There are kind of two clusters: one is people who talk about
             | meditation all the time, the other is center-right people
             | who did drugs once. I think the second group showed up
             | because rationalists are not-so-secretly into scientific
             | racism (because they believe anything they see with numbers
             | in it) and they just wanted to hang out with people like
             | that.
             | 
             | There is an interesting atmosphere where it feels like they
             | observed California big tech 1000x engineer types and are
             | trying to cargo cult the way those people behave. I'm not
             | sure what they get out of it.
        
         | hiAndrewQuinn wrote:
         | >Maybe it's actually going to be rather benign and more boring
         | than expected
         | 
         | Maybe, but generally speaking, if I think people are playing
         | around with technology which a lot of smart people think might
         | end humanity as we know it, I would want them to stop until we
         | are really sure it won't. Like, "less than a one in a million
         | chance" sure.
         | 
         | Those are big stakes. I would have opposed the Manhattan
         | Project on the same principle had I been born 100 years
         | earlier, when people were worried the bomb might ignite the
         | world's atmosphere. I oppose a lot of gain-of-function virus
         | research today too.
         | 
         | That's not a point you have to be a rationalist to defend. I
         | don't consider myself one, and I wasn't convinced by them of
         | this - I was convinced by Nick Bostrom's book
          | _Superintelligence_, which lays out his case with most of the
         | assumptions he brings to the table laid bare. Way more in the
         | style of Euclid or Hobbes than ... whatever that is.
         | 
         | Above all I suspect that the Internet rationalists are
          | basically a 30-year-long campaign of "any publicity is good
         | publicity" when it comes to existential risk from
         | superintelligence, and for what it's worth, it seems to have
         | worked. I don't hear people dismiss these risks very often as
         | "You've just been reading too many science fiction novels"
         | these days, which would have been the default response back in
         | the 90s or 2000s.
        
           | s1mplicissimus wrote:
           | > I don't hear people dismiss these risks very often as
           | "You've just been reading too many science fiction novels"
           | these days, which would have been the default response back
           | in the 90s or 2000s.
           | 
           | I've recently stumbled across the theory that "it's gonna go
           | away, just keep your head down" is the crisis response that
           | has been taught to the generation that lived through the cold
            | war, so that's how they act. That bit was in regard to
            | climate change, but I can easily see it applying to AI as
            | well (even though I personally believe that the whole "AI
            | eats the world" arc is only so popular due to marketing
            | efforts of the corresponding industry).
        
             | hiAndrewQuinn wrote:
             | It's possible, but I think that's just a general human
             | response when you feel like you're trapped between a rock
             | and a hard place.
             | 
             | I don't buy the marketing angle, because it doesn't
             | actually make sense to me. Fear draws eyeballs, sure, but
             | it just seems otherwise nakedly counterproductive, like a
             | burger chain advertising itself on the brutality of its
             | factory farms.
        
               | lcnPylGDnU4H9OF wrote:
               | > like a burger chain advertising itself on the brutality
               | of its factory farms
               | 
               | It's rather more like the burger chain decrying the
               | brutality as a reason for other burger chains to be
               | heavily regulated (don't worry about them; they're the
               | guys you can trust and/or they are practically already
               | holding themselves to strict ethical standards) while
               | talking about how _delicious_ and _juicy_ their meat
               | patties are.
               | 
               | I agree about the general sentiment that the technology
               | is dangerous, especially from a "oops, our agent stopped
               | all of the power plants" angle. Just... the messaging
               | from the big AI services is both that and marketing hype.
               | It seems to get people to disregard real dangers as
               | "marketing" and I think that's because the actual
               | marketing puts an outsized emphasis on the dangers.
               | (Don't hook your agent up to your power plant controls,
               | please and thank you. But I somehow doubt that OpenAI and
               | Anthropic will not be there, ready and willing, despite
               | the dangers they are oh so aware of.)
        
               | hiAndrewQuinn wrote:
               | That is how I normally hear the marketing theory
               | described when people go into it in more detail.
               | 
               | I'm glad you ran with my burger chain metaphor, because
               | it illustrates why I think it doesn't work for an AI
               | company to intentionally try and advertise themselves
               | with this kind of strategy, let alone ~all the big
               | players in an industry. Any ordinary member of the
               | burger-eating public would be turned off by such an
               | advertisement. Many would quickly notice the unsaid
               | thing; those not sharp enough to would probably just see
               | the descriptions of torture and be less likely on the
               | margin to go eat there instead of just, like, safe happy
               | McDonald's. Analogously we have to ask ourselves why
               | there seems to be no Andreessen-esque major AI lab that
               | just says loud and proud, "Ignore those lunatics.
               | Everything's going to be fine. Buy from us." That seems
               | like it would be an excellent counterpositioning strategy
               | in the 2025 ecosystem.
               | 
               | Moreover, if the marketing theory is to be believed,
                | these kinds of pseudo-ads are _not_ targeted at the
               | lowest common denominator of society. Their target is
               | people with sway over actual regulation. Such an audience
               | is going to be much more discerning, for the same reason
               | a machinist vets his CNC machine advertisements much more
               | aggressively than, say, the TVs on display at Best Buy.
               | The more skin you have in the game, the more sense it
               | makes to stop and analyze.
               | 
               | Some would argue the AI companies know all this, and are
               | gambling on the chance that they are able to get
               | regulation through _and_ get enshrined as some state-
               | mandated AI monopoly. A well-owner does well in a desert,
               | after all. I grant this is a possibility. I do not think
               | the likelihood of success here is very high. It was
               | higher back when OpenAI was the only game in town, and I
               | had more sympathy for this theory back in 2020-2021, but
               | each serious new entrant cuts this chance down
               | multiplicatively across the board, and by now I don 't
               | think anyone could seriously pitch that to their
               | investors as their exit strategy and expect a round of
               | applause for their brilliance.
        
               | ummonk wrote:
               | It's also reasonable as a Pascal's wager type of thing.
               | If you can't affect the outcome, just prepare for the
               | eventuality that it will work out because if it doesn't
               | you'll be dead anyway.
        
           | socalgal2 wrote:
            | Do you think opposing the Manhattan Project would have led
           | to a better world?
           | 
           | note, my assumption is not that the bomb would not have been
            | developed. Only that by opposing the Manhattan Project the
           | USA would not have developed it first.
        
             | hiAndrewQuinn wrote:
             | My answer is yes, with low-moderate certainty. I still
             | think the USA would have developed it first, and I think
             | this is what is suggested to us by the GDP trends of the US
             | versus basically everywhere else post-WW2.
             | 
             | Take this all with more than a few grains of salt. I am by
             | no means an expert in this territory. But I don't shy away
             | from thinking about something just because I start out
             | sounding like an idiot. Also take into account this is
              | _post-hoc_, and 1940 Manhattan Project me would obviously
              | have had much, much less information to work with about how
              | things actually panned out. My answer to this question
              | should be seen as separate from the question of whether I
             | think dodging the Manhattan Project would have been a good
             | bet, so to speak.
             | 
             | Most historians agree that Japan was going to lose one way
             | or another by that point in the war. Truman argued that
             | dropping the bomb killed fewer people in Japan than
             | continuing, which I agree with, but that's a relatively
             | small factor in the calculation.
             | 
             | The much bigger factor is that the success of the Manhattan
             | Project as an ultimate existence proof for the possibility
             | of such weaponry almost certainly galvanized the Soviet
             | Union to get on the path of building it themselves much
             | more aggressively. A Cold War where one side takes
             | substantially longer to get to nukes is mostly an obvious
             | x-risk win. Counterfactual worlds can never be seen with
             | certainty, but it wouldn't surprise me if the mere
             | existence proof led the USSR to actually create their own
             | atomic weapons a decade faster than they would have
             | otherwise, by e.g. motivating Stalin to actually care about
             | what all those eggheads were up to (much to the terror of
             | said eggheads).
             | 
             | This is a bad argument to advance when we're arguing about
             | e.g. the invention of calculus, which as you'll recall was
             | coinvented in at least 2 places (Newton with fluxions,
             | Liebniz with infinitesimals I think), but calculus was the
             | kind of thing that could be invented by one smart guy in
             | his home office. It's a much more believable one when the
             | only actors who could have made it were huge state-
             | sponsored laboratories in the US and the USSR.
             | 
             | If you buy that, that's 5 to 10 extra years the US would
             | have had in order to do something _like_ the Manhattan
             | Project, but in much more controlled, peace-time
             | environments. The atmosphere-ignition prior would have been
             | stamped out pretty quickly by later calculations of
             | physicists to the contrary, and after that research would
             | have gotten back to full steam ahead. I think the
             | counterfactual US would have gotten onto the atom bomb in
             | the early 1950s at the absolute latest with the talent they
             | had in an MP-less world. Just with much greater safety
             | protocols, and without the Russians learning of it in such
             | blatant fashion. Our abilities to _detect_ such weapons
             | being developed elsewhere would likely have also stayed far
             | ahead of the Russians. You could easily imagine a situation
             | where the Russians finally create a weapon in 1960 that was
             | almost as powerful as what we had cooked up by 1950.
             | 
             | Then you're more or less back to an old-fashioned
             | deterrence model, with the twist that the Russians don't
             | actually know exactly how powerful the weapons the US has
             | developed are. This is an absolute good: You can always
             | choose to reveal just a lower bound of how powerful your
             | side is, if you think you need to, or you can choose to
             | remain totally cloaked in darkness. If you buy the
             | narrative that the US were "the good guys" (I do!) and
             | wouldn't risk armaggedon just because they had the upper
             | hand, then this seems like it can only make the future arc
             | of the (already shorter) Cold War all the safer.
             | 
             | I am assuming Gorbachev or someone still called this whole
             | circus off around the late 80s-early 90s. Gotta trim the
             | butterfly effect somewhere.
        
         | voidhorse wrote:
         | To me they have always seemed like a breed of "intellectuals"
         | who only want to use knowledge to inflate their own egos and
          | maintain a fragile superiority complex. They aren't actually
         | interested in the truth so much as they are interested in
         | convincing you that _they_ are right.
        
         | camgunz wrote:
         | Yeah I don't know or really care about Rationalism or whatever.
         | But I took Aaronson's advice and read Zvi Mowshowitz'
         | _Childhood and Education #9: School is Hell_ [0], and while I
         | share many of the criticisms (and cards on the table I also had
         | pretty bad school experiences), I would have a hard time
         | jumping onto this bus.
         | 
         | One point is that when Mowshowitz is dispelling the argument
         | that abuse rates are much higher for homeschooled kids, he (and
         | the counterargument in general) references a study [1] showing
         | that abuse rates for non-homeschooled kids are similarly high:
         | both around 37%. That paper's no good though! Their conclusion
         | is "We estimate that 37.4% of all children experience a child
         | protective services investigation by age 18 years." 37.4%?
         | That's 27m kids! How can CPS run so many investigations? That's
         | 4k investigations _a day_ over 18 years, no holidays or
         | weekends. Nah. Here are some good numbers (that I got to from
         | the bad study, FWIW) [2], they're around 4.2%.
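         | 
         | (As a sanity check of that arithmetic, a minimal Python
         | sketch; the ~73M count of US children is my assumption, not
         | a figure from the paper:)
         | 
         |   children = 73_000_000            # approx. US under-18 population (assumed)
         |   investigated = 0.374 * children  # the paper's 37.4% -> ~27.3M kids
         |   print(round(investigated / (18 * 365)))  # -> 4156, the "4k a day" above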
         | 
         | But, more broadly, the worst failing of the US educational
         | system isn't how it treats smart kids; it's how it treats the
         | kids it fails. If you're not in the 80% of kids who can
         | somehow make it in the school system, you're doomed.
         | Mowshowitz' article is nearly entirely dedicated to how hard it
         | is to liberate your suffering, gifted student from the prison
         | of public education. This is a real problem! I agree it would
         | be good to solve it!
         | 
         | But, it's just not _the_ problem. Again I'm sympathetic to and
         | agree with a lot of the points in the article, but you can
         | really boil it down to "let smart, wealthy parents homeschool
         | their kids without social media scorn". Fine, I guess. No one's
         | stopping you from deleting your account and moving to
         | California. But it's not an efficient use of resources--and
         | it's certainly a terrible political strategy--to focus on such
         | a small fraction of the population, and to be clear this is the
         | absolute nicest way I can characterize these kinds of policy
         | positions. This thing is going nowhere as long as it stays so
         | self-obsessed.
         | 
         | [0]: https://thezvi.substack.com/p/childhood-and-
         | education-9-scho...
         | 
         | [1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC5227926/
         | 
         | [2]:
         | https://acf.gov/sites/default/files/documents/cb/cm2023.pdf
        
           | ummonk wrote:
           | > but you can really boil it down to "let smart, wealthy
           | parents homeschool their kids without social media scorn"
           | 
           | The whole reason smart people are engaging in this debate in
           | the first place is that professional educators keep trying to
           | train their sights on smart wealthy parents homeschooling
           | their kids.
           | 
           | By the way, this small fraction of the population is
           | responsible for driving the bulk of R&D.
        
             | camgunz wrote:
             | I mean, I'm fine addressing Tabarrok's argument head on: I
             | think there's far more to gain helping the millions of
             | kids/adults who are functionally illiterate than helping
             | the small number of gifted kids the educational system is
             | underserving. His argument is essentially "these kids will
             | raise the tide and lift all boats", but it's clear that
             | although the tide has been rising for generations (advances
             | in the last 60-70 years are truly breathtaking) more kids
             | are being left behind, not fewer. There's no reason to
             | expect this dynamic to change unless we tackle it directly.
        
           | genewitch wrote:
           | My wife is an LMSW (not CPS!) and sees ~5 people a day. 153,922
           | population in the metro area. Mind you, this is adults, but
           | they're all mandated to show up.
           | 
           | There's only ~3300 counties in the USA.
           | 
           | I'll let you extrapolate how CPS can handle "4000/day": 800
           | people with my wife's qualifications and caseload are
           | equivalent to 4000/day. There's ~5000 caseworkers in the US
           | per Statista:
           | 
           | > In 2022, there were about 5,036 intake and screening
           | workers in child protective services in the United States. In
           | total, there were about 30,750 people working in child
           | protective services in that year.
        
             | verall wrote:
             | 37% of children obviously do not experience a CPS
             | investigation before age 18.
        
               | genewitch wrote:
               | not what i am speaking to. I don't know the number, and
               | neither do you. you'd have to call those 5000 CPS
               | caseworkers and ask them what their caseload is (it's 69
               | per caseworker on average across the US. that's a third
               | of a million cases, in aggregate across all caseworkers)
               | 
               | my wife's caseload (adults) "floats around fifty."
        
               | verall wrote:
               | > not what i am speaking to
               | 
               | My misunderstanding then - what are you speaking to? Even
               | reading this comment, I still don't understand.
        
               | genewitch wrote:
               | >> 37.4%? That's 27m kids! How can CPS run so many
               | investigations? That's 4k investigations a day over 18
               | years,
               | 
               | > 800 people with my wife's qualifications and caseload
               | is equivalent to 4000/day. there's ~5000 caseworkers in
               | the US
               | 
                | I don't know what the number of children in the system
                | is. As I said in the comment you replied to, the
                | average US CPS worker caseload is 69 cases, which is
                | over 300,000 children per year, because there are
                | ~5000 CPS caseworkers in the US.
               | 
               | I was _only_ speaking to  "how do they 'run' that many
               | investigations?" as if it's impossible. I pointed out
               | it's possible with ~1000 caseworkers.
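                | 
                | (Spelled out as a minimal Python sketch, using only the
                | numbers in this thread:)
                | 
                |   per_worker_per_day = 5            # my wife's daily rate, above
                |   print(4000 / per_worker_per_day)  # -> 800.0 workers needed
                |   print(5036 * 69)  # intake workers x avg caseload -> 347484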
        
             | camgunz wrote:
             | Yeah OK I can see that. Mostly you inspired me to do a
             | little napkin math based on the report I linked, which says
             | ~3.1m kids got CPS investigations (etc) in 2023, which is
             | ~8,500 a day. But, the main author in a subsequent paper
             | shows that only ~13% of kids have confirmed maltreatment
             | [0]. That's still far lower than the 38% for homeschooled
             | kids.
             | 
             | [0]: https://pmc.ncbi.nlm.nih.gov/articles/PMC5087599/
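              | 
              | (The same napkin math as a one-liner, in Python:)
              | 
              |   print(round(3_100_000 / 365))  # 2023 investigations -> 8493 a day
              |   # vs. the subsequent paper's ~13% confirmed maltreatment by age 18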
        
               | genewitch wrote:
               | I wonder if the CPS on homeschooled children rate is from
               | people who had their children in school and then "pulled
               | them out" vs people who never had their children in
               | school at all. As some comedian said "you're on the grid
                | [...], they have your footprint"; I know it used to be
               | "known" that school districts go after the former because
               | it literally loses them money to lose a student, whereas
               | with the latter, the kid isn't on the books.
               | 
                | Also, I wasn't considering "confirmed maltreatment" -
                | just the fact that 4k/day isn't "impossible".
        
           | matthewdgreen wrote:
           | Cherry-picking friendly studies is one of the go-to moves of
           | the rationalist community.
           | 
           | You can convince _a lot of people_ that you've done your
           | homework when the medium is "an extremely long blog post
           | with a bunch of studies attached", even if the studies
           | themselves aren't representative of reality.
        
             | tasty_freeze wrote:
             | Is there any reason you are singling out the rationalist
             | community? Is that not a common failure mode of all groups
             | and all people?
             | 
             | BTW, this isn't a defensive posture on my part: I am not
             | plugged in enough to even have an opinion on any
             | rationalist community, much less identify as one.
        
         | dv_dt wrote:
         | Rationalist discussions rarely consider what should be the
         | baseline assumption: what if one or more of the logical
         | assumptions or associations is wrong? They also tend not to
         | systematically plan to validate their conclusions. And in
         | many domains, what holds true at one moment can easily shift.
        
           | resource_waste wrote:
           | 100%
           | 
           | Rationalism is an ideal, yet those who label themselves as
           | such do not realize their base of knowledge could be wrong.
           | 
           | They lack an understanding of epistemology, and it gives
           | them confidence. I wonder if these 'rationalists' are all
           | under age 40; they haven't seen themselves fooled yet.
        
             | cogman10 wrote:
              | It's every bit a proto-religion. And frankly quite
             | reminiscent of my childhood faith.
             | 
             | It has a priesthood that speaks for god (quantum). It has
             | ideals passed down from on high. It has presuppositions
             | about how the universe functions which must not be
             | questioned. And it's filled with people happy that they are
             | the chosen ones and they feel sorry for everyone that isn't
             | enlightened like they are.
             | 
              | In the OP's article, I had to chuckle a little when they
             | started the whole thing off by mentioning how other
             | Rationalists recognized them as a physicist (they aren't).
             | Then they proceeded to talk about "quantum cloning theory".
             | 
             | Therein is the problem. A bunch of people vociferously
             | speaking outside their expertise confidently and being
             | taken seriously by others.
        
             | mitthrowaway2 wrote:
             | This seems like exactly the opposite of everything I've
             | read from the rationalists. They even called their website
             | "less wrong" to call attention to knowing that they are
             | probably still wrong about things, rather than right about
             | everything. A lot of their early stuff is about cognitive
             | biases. They have written a lot about "noticing confusion"
             | when their foundational beliefs turn out to be wrong.
             | There's even an essay about what it would feel like to be
             | wrong about something as fundamental as 2+2=4.
             | 
             | Do you have specific examples in mind? (And not to put too
             | fine a point on it, do you think there's a chance that
                | _you_ might be wrong about this assertion? You've
             | expressed it very confidently...)
        
               | astrange wrote:
               | They're wrong about how to be wrong, because they think
               | they can calculate around it. Calling yourself "Bayesian"
               | and calling your beliefs "priors" is so irresponsible it
               | erases all of that; it means you don't take
               | responsibility if you have silly beliefs, because you
               | don't even think you hold them.
        
         | js8 wrote:
         | > The people involved all seem very... Full of themselves ?
         | 
         | Kinda like Mensa?
        
           | parpfish wrote:
           | When I was a kid I wanted to be in Mensa because being smart
           | was a big part of my identity and I was constantly seeking
           | external validation.
           | 
           | I'm so glad I didn't join because being around the types of
           | adults that make being smart their identity surely would have
           | had some corrosive effects.
        
             | NoGravitas wrote:
             | Personally, I subscribe to Densa, the journal of the Low-IQ
             | Society.
        
               | GLdRH wrote:
                | This month: Is Brawndo really what plants crave?
        
               | gadders wrote:
               | I love colouring in my issue every month.
        
             | GLdRH wrote:
             | I didn't meet anyone who seemed arrogant.
             | 
             | However I'm always surprised how much some people want to
             | talk about intelligence. I mean, it's the common ground of
             | the group in this case, but still.
        
         | baxtr wrote:
         | My main problem with the movement is their emphasis on
         | Bayesianism in conjunction with an almost total neglect of
         | Popperian epistemology.
         | 
         | In my opinion, there can't be a meaningful distinction made
         | between rational and irrational without Popper.
         | 
         | Popper injects an epistemic humility that Bayesianism, taken
         | alone, can miss.
         | 
         | I think that aligns well with your observation.
        
           | agos wrote:
           | is epidemiology a typo for epistemology or am I missing
           | something?
        
             | baxtr wrote:
             | Yes, thx, fixed it.
        
           | kurtis_reed wrote:
           | So what's the difference between Bayesianism and Popperian
           | epistemology?
        
             | uniqueuid wrote:
             | Popper requires you to posit null hypotheses to falsify
             | (although there are different schools of thought on what
             | exactly you need to specify in advance [1]).
             | 
             | Bayesianism requires you to assume / formalize your prior
             | belief about the subject under investigation and updates it
             | given some data, resulting in a posterior belief
             | distribution. It thus does not have the clear distinctions
             | of frequentism, but that can also be considered an
             | advantage.
             | 
             | [1] https://web.mit.edu/hackl/www/lab/turkshop/readings/gig
             | erenz...
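              | 
              | (To make that concrete: a minimal Python sketch of a
              | Bayesian update, using an illustrative beta-binomial
              | model of my own choosing, not one from the reading
              | above:)
              | 
              |   # Prior belief about a success rate: Beta(1, 1), i.e. uniform.
              |   a, b = 1, 1
              |   # Observe 7 successes in 10 trials (made-up data);
              |   # conjugacy makes the update a simple addition.
              |   k, n = 7, 10
              |   a, b = a + k, b + (n - k)
              |   print(a / (a + b))  # posterior mean: 8/12 = 0.666...
              | 
              | There is no accept/reject decision anywhere; the output
              | is just a revised belief, which is the lack of "clear
              | distinctions" mentioned above.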
        
           | kragen wrote:
           | Hmm, what epistemological propositions of Popper's do you
           | think they're missing? To the extent that I understand the
           | issues, they're building on Popper's epistemology, but by
           | virtue of having a more rigorous formulation of the issues,
           | they resolve some of the apparent contradictions in his
           | views.
           | 
           | Most of Popper's key points are elaborated on at length in
           | blog posts on LessWrong. Perhaps they got something wrong? Or
           | overlooked something major? If so, what?
           | 
           | (Amusingly, you seem to have avoided making any falsifiable
           | claims in your comment, while implying that you could easily
           | make many of them...)
        
             | baxtr wrote:
             | _> Popper's falsificationism - this is the old philosophy
             | that the Bayesian revolution is currently dethroning._
             | 
             | https://www.yudkowsky.net/rational/bayes
             | 
             | These are the kind of statements I'm referring to. Happy to
             | be falsified btw :) that's how we learn.
             | 
             | Also note that Popper never called his theory
             | _falsificationism_.
        
           | uniqueuid wrote:
           | The counterpoint here is that in practice, humility is only
           | found in the best of frequentists, whereas the rest succumb
           | to hubris (i.e. the cult of irrelevant precisions).
        
           | the_af wrote:
           | What really confuses me is that many in this so called
           | "rationalist" clique discuss Bayesianism as an "ism", some
           | sort of sacred, revered truth. They talk about it in mystical
           | terms, which matches the rest of their cult-like behavior.
           | What's the deal with that?
        
             | mitthrowaway2 wrote:
             | That's specific to Yudkowsky, and I think that's just
             | supposed to be humor. A lot of people find mathematics very
             | dry. He likes to dress it up as "what if we pretend math is
             | some secret revered knowledge?".
        
               | jrm4 wrote:
                | Yeah, but this feels like "more truth is said in jest
                | etc etc"
        
               | smitty1110 wrote:
               | The best jokes all have a kernel of truth at their core,
               | but I think a lot of Yudkowsky's acolytes missed the
               | punch line.
        
           | empiko wrote:
           | I actually think that their main problem is the belief that
           | they can learn everything about the world by reading stuff on
           | the Web. You can't understand everything by reading blogs
           | and books; in the end, some things are best understood when
           | you are on the ground. Unironically, they should go touch
           | grass.
           | 
           | One example out of many: it was claimed that a great
           | rationalist policy is to distribute treated mosquito nets
           | to third-worlders to help eradicate malaria. On the ground,
           | the same nets were commonly used for fishing and other
           | activities, polluting the environment with insecticides.
           | Unfortunately, rationalists forgot to ask the people who
           | live with mosquitos what they would do with such nets.
        
             | noname120 wrote:
             | > On the ground, the same nets were commonly used for
             | fishing and other activities, polluting the environment
             | with insecticides.
             | 
             | Could you recommend an article to learn more about this?
        
         | ummonk wrote:
         | Rationalists have always rubbed me the wrong way too but your
         | argument against AI doomerism is weird. If you care about first
         | principles, how about the precautionary principle? "Maybe it's
         | actually benign" is not a good argument for moving ahead with
         | potentially world ending technology.
        
           | xyzzy123 wrote:
           | I don't think "maybe it's benign" is where anti doomers are
           | coming from, more like, "there are also costs to not doing
           | things".
           | 
           | The doomer utilitarian arguments often seem to involve some
           | sort of infinity or really large numbers (much like EAs),
           | which results in various kinds of philosophical mugging.
           | 
           | In particular, the doomer plans invariably result in some
           | need for draconian centralised control: some kind of body
           | or system that can tell everyone what to do, with (of
           | course) doomers in charge.
        
             | XorNot wrote:
             | It's just the slippery-slope fallacy: if X then _obviously_
             | Y will follow, and there will be no further decisions,
             | debate or time before it does.
        
               | parpfish wrote:
               | One of my many peeves has been the way that people misuse
               | the term "slippery slope" as evidence for their stance.
               | 
               | "If X, then surely Y will follow! It's a slippery slope!
               | We can't allow X!"
               | 
                | They call out the fallacy they are committing BY NAME
                | and think that it somehow supports their conclusion?
        
               | gausswho wrote:
               | I rhetorically agree it's not a good argument, but its
               | use as a cautionary metaphor predates its formalization
                | as a logical fallacy. Its summoning is not proof in
                | and of itself (i.e. the 1st amendment). It suggests a
                | concern rather than demonstrating one. It's lazy, and
                | a good habit to
               | rid oneself of. But its presence does not invalidate the
               | argument.
        
               | adastra22 wrote:
               | Yes, it does. The problem with the slippery slope is that
               | the slope itself is not argued for. You haven't shown the
               | direct, inescapable causal connection between the current
               | action and the perceived very negative future outcome.
               | You've just stated/assumed it. That's what the fallacy
               | is.
        
           | IshKebab wrote:
           | He wasn't saying "maybe it's actually going to be benign" is
           | an argument for moving ahead with potentially world ending
           | technology. He was saying that it might end up being benign
           | and rationalists who say it's _definitely_ going to be the
           | end of the world are wildly overconfident.
        
             | noname120 wrote:
              | No rationalist claims that it's "_definitely_ going to be
              | the end of the world". In fact they estimate the chance
              | that AI becomes an existential risk by the end of the
              | century at less than 30%.
        
               | nradov wrote:
               | Who is "they" exactly, and how can they estimate the
               | probability of a future event based on zero priors and a
               | total lack of scientific evidence?
        
           | eviks wrote:
           | But not accepting this technology could also be potentially
           | world ending, especially if you want to start many new wars
           | to achieve that, so caring about first principles like
           | peace and anti-Luddism brings us back to the original "real
           | lack of humility..."
        
           | nradov wrote:
           | The precautionary principle is stupid. If people had followed
           | it then we'd still be living in caves.
        
             | ummonk wrote:
             | I take it you think the survivorship bias principle and the
             | anthropic principle are also stupid?
        
               | nradov wrote:
               | Don't presume to know what I think.
        
           | adastra22 wrote:
           | The precautionary principle does active harm to society
           | because of opportunity costs. All the benefits we have reaped
           | since the Enlightenment have come from proactionary
           | endeavors, not precautionary hesitation.
        
         | NoGravitas wrote:
         | I've always seen the breathless Singularitarian worrying about
         | AI Alignment as a smokescreen to distract people from thinking
         | clearly about the more pedestrian hazards of AI that isn't
         | self-improving or superhuman, from algorithmic bias, to policy-
         | washing, to energy costs and acceleration of wealth
         | concentration. It also leads to so-called longtermism -
         | discounting the benefits of solving current real problems and
         | focusing entirely on solving a hypothetical one that you think
         | will someday make them all irrelevant.
        
           | philipov wrote:
           | yep, the biggest threat posed by AI comes from the
           | capitalists who want to own it.
        
             | parpfish wrote:
             | Or the propagandists that use it
        
               | bilbo0s wrote:
               | They won't be allowed to use it unless they serve the
               | capitalists who own it.
               | 
               | It's not social media. It's a model the capitalists train
               | _and_ own. Best the rest of us will have access to are
                | open source ones. It's like the difference between
               | trying to go into court backed by google searches as
               | opposed to Lexis/Nexis. You're gonna have a bad day with
               | the judge.
               | 
               | Here's hoping the open source stuff gets trained on
               | quality data rather than reddit and 4chan. Given how the
                | courts are leaning on copyright, and the lack of
                | vetted data outside copyright holders' remit, I'm not
                | sanguine about
               | the chances of parity long term.
        
               | thrance wrote:
               | The propagandists serve the capitalists, so it's all the
               | same.
        
             | impossiblefork wrote:
             | I actually think the people developing AI might well not
             | get rich off it.
             | 
             | Instead, unless there's a single winner, we will probably
             | see the knowledge on how to train big LLMs and make them
             | perform well diffuse throughout a large pool of AI
             | researchers, with the hardware to train models reasonably
              | close to the SotA becoming quite accessible.
             | 
             | I think the people who will benefit will be the owners of
             | ordinary but hard-to-dislodge software firms, maybe those
             | that have a hardware component. Maybe firms like Apple,
             | maybe car manufacturers. Pure software firms might end up
             | having AI assisted programmers as competitors instead,
             | pushing margins down.
             | 
             | This is of course pretty speculative, and it's not reality
             | yet, since firms like Cursor etc. have high valuations, but
              | I think this is what you'd get from the probable pressure
             | if it keeps getting better.
        
               | cogman10 wrote:
               | It smacks of a goldrush. The winners will be the people
               | selling shovels (nVidia) and housing (AWS). It may also
               | be the guides showing people the mountains (Cursor,
               | OpenAI, etc).
               | 
                | I suspect you'll see a few people "win" or strike it
                | rich with AI; the vast majority will simply be left
                | with a big bill.
        
               | nradov wrote:
               | When railroads were first being built across the
               | continental USA, those companies also had high valuations
               | (for the time). Most of them ultimately went bankrupt or
               | were purchased for a fraction of their peak valuation.
               | But the tracks remained, and many of those routes are
               | still in use today.
        
               | bilbo0s wrote:
               | Just checked.
               | 
               | The problem is the railroads were purchased by the
               | winners. Who turned out to be the existing winners. Who
               | then went on to continue to win.
               | 
               | On the one hand, I guess that's just life here in
               | reality.
               | 
               | On the other, man, reality sucks sometimes.
        
               | baq wrote:
                | That's capitalism for you - losers' margins were winners'
               | opportunities.
               | 
               | Imagine if they were bought by losers.
        
               | pavlov wrote:
               | And today the railroad system in the USA sucks compared
               | to other developed countries and even China.
               | 
               | It turns out that boom-and-bust capitalism isn't great
               | for building something that needs to evolve over
               | centuries.
               | 
               | Perhaps American AI efforts will one day be viewed
               | similarly. "Yeah, they had an early rush, lots of
               | innovation, high valuations, and robber barons competing.
               | Today it's just stale old infra despite the high-energy
               | start."
        
               | nradov wrote:
               | Nope. USA #1:
               | 
               | https://www.worldatlas.com/articles/highest-railway-
               | cargo-tr...
        
               | lupusreal wrote:
               | America's passenger rail sucks, it couldn't compete with
               | airplanes and every train company got out of the
               | business, abandoning it to the government. But America
               | does have a great deal of freight rail which sees a lot
               | of use (much more than in Europe, I don't know how it
               | compares to China though.)
        
               | lazyasciiart wrote:
               | One reason the passenger service sucks is that the
               | freight rail companies own the tracks, and are happy to
               | let a passenger train sit behind a freight train for a
               | couple hours waiting for space in the freight yard so it
               | can get out of the way.
        
               | lupusreal wrote:
               | The root cause is Americans, generally, prefer any mode
               | of transit other than rail, so passenger rail isn't
               | profitable, so train companies naturally prioritize
               | freight.
               | 
               | For what it's worth, I like traveling by train and do so
               | whenever I can, but I'm an outlier. Most Americans look
               | at the travel times and laugh at the premise of choosing
               | a train over a plane. And when I say they look at the
               | travel times, I don't mean they actually bother to look
               | up train routes. They just know that airplanes are
               | several times faster. Delays suffered by trains never get
               | factored into the decision because trains aren't taken
               | seriously in the first place.
        
               | 9rx wrote:
               | _> freight rail companies own the tracks_
               | 
               | Humans are also freight, of course. It is not like the
                | rail companies really care about what kind of freight is
               | on the trains, so long as it is what the customer
               | considers most important (read: most profitable). Humans
               | are deprioritized exactly because they aren't considered
               | important by the customer, which is to say that the
               | customer, who is also the freight in this case, doesn't
               | really want to be on a train in the first place. The
               | customer would absolutely ensure priority (read: pay
               | more, making it known that they are priority) if they
               | wanted to be there.
               | 
               | I understand the train geeks on the internet find it hard
               | to believe that not everyone loves trains like they do,
               | but the harsh reality is that the average American Joe
               | prefers other means of transportation. Should that change
               | in the future, the rail network will quickly accommodate.
               | It has before!
        
               | _DeadFred_ wrote:
               | China hasn't shown that their railroad buildout will
               | work. My understanding is they currently aren't making
                | enough return to pay off debt, let alone plan for future
               | maintenance. Historically the command economy type stuff
               | looks great in the early years, it's later on we see if
               | that is reality.
               | 
                | You are comparing the USA today to the robber baron
                | phase; who's to say China isn't in the same phase? Lots
                | of money is being thrown at new railroads, and you have
                | Chinese leaders and managers chasing that money. What
                | happens when it goes into low-budget maintenance mode?
        
               | entropicdrifter wrote:
               | The USA today is _in_ a robber baron phase. We only
               | briefly left it for about 2 generations due to the rise
                | of labor power in the late 1800s/early 1900s. F.D.R. was
               | the compromise president put into place to placate labor
               | and prevent a socialist revolution.
        
               | 9rx wrote:
               | _> And today the railroad system in the USA sucks
               | compared to other developed countries and even China._
               | 
               | Nonsense. The US has the largest freight rail system in
               | the world, and is considered to have the most efficient
               | rail system in the world to go along with it.
               | 
               | There isn't much in the way of passenger service,
               | granted, but that's because people in the US aren't,
               | well, poor. They can afford better transportation
               | options.
               | 
               |  _> It turns out that boom-and-bust capitalism isn't
               | great for building something that needs to evolve over
               | centuries._
               | 
               | It initially built out the passenger rail just fine, but
               | then evolution saw better options come along. Passenger
               | rail disappeared because it no longer served a purpose.
               | It is not like, say, Japan where the median household
               | income is approaching half that of _Mississippi_ and they
                | hold on to rail because that's what is affordable.
        
               | Detrytus wrote:
               | > There isn't much in the way of passenger service,
               | granted, but that's because people in the US aren't,
               | well, poor. They can afford better transportation
               | options.
               | 
                | This is such a misguided view... Trains (when done right)
               | aren't "for the poor", they are great transportation
               | option, that beats both airplanes and cars. In Poland,
               | which isn't even close to the best, you can travel
               | between big cities with speeds above 200km/h, and you can
               | use regional rail for your daily commute, both those
               | options being very comfortable and convenient, much more
               | convenient than traveling by car.
        
               | 9rx wrote:
               | Poland is approximately the same geographical size as
               | Nevada. In the US, "between cities" is more like New York
               | to Las Vegas, not Las Vegas to... uh, I couldn't think of
               | another city in Nevada off the top of my head. What
               | under-serviced route were you thinking of there?
               | 
               | What gives you the idea that rail would be preferable to
               | flying for the NYC to LAS route if only it existed? Even
               | as the crow flies it is approximately 4,000 km, meaning
               | that at 200 km/h you are still looking at around 20 hours
               | of travel in an ideal case. Instead of just 5 hours by
               | plane. If you're poor an additional 15 hours wasted might
               | not mean much, but when time is valuable?
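                | 
                | (The travel-time arithmetic, as a minimal Python
                | sketch of the figures above:)
                | 
                |   hours_by_rail = 4000 / 200  # ~4,000 km at 200 km/h -> 20.0
                |   hours_by_air = 5            # rough gate-to-gate figure above
                |   print(hours_by_rail - hours_by_air)  # -> 15.0 extra hours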
        
               | danans wrote:
               | > In the US, "between cities" is more like New York to
               | Las Vegas, not Las Vegas to... uh, I couldn't think of
               | another city in Nevada off the top of my head. What
               | under-serviced route were you thinking of there?
               | 
               | Why would you constrain the route to within a specific
               | state? In fact, right now a high-speed rail line is being
               | planned between Las Vegas and LA.
               | 
               | But outside of Nevada, there are many equivalent distance
               | routes in the US between major population centers,
               | including:
               | 
               | Chicago/Detroit
               | 
               | Dallas/Houston
               | 
               | LA/SF
               | 
               | Atlanta/Charlotte
        
               | AnimalMuppet wrote:
                | _Is_ being built? Um, not quite. Is being planned. Is
               | arranging for right-of-way. But to the best of my
               | knowledge, actual construction has not started.
        
               | Detrytus wrote:
               | Also, if you go all in and build something equivalent to
               | Chinese bullet trains (that go with speeds up to 350km/h)
               | you could do for example NY to Chicago in 3.5 hours, or
               | even NY to Miami in 6 hours :-D (I know, not very
               | realistic)
        
               | gwd wrote:
               | Not sure how we got from Scott A being a rationalist to
               | trains, but since we're here, I want to say:
               | 
               | I've taken a Chinese train from Zhengzhou, in central
               | China, to Shenzhen, and it was _fantastic_. Cheap,
               | smooth, fast, lots of legroom, easy to get on and off or
                | walk around to the dining car. And, there's a thing
               | where boiling hot water is available, so everyone brings
               | instant noodle packs of every variety to eat on the
               | train.
               | 
               | Can't even imagine what the US would be like if we had
               | that kind of thing.
        
               | 9rx wrote:
               | _> In fact, right now a high-speed rail line is being
               | planned between Las Vegas and LA._
               | 
               | Right now and since 1979!
               | 
               | I'll grant you that people love to plan, but it turns out
               | that they don't love putting on their boots and picking
               | up a shovel nearly as much.
               | 
               |  _> But outside of Nevada, there are many equivalent
               | distance routes in the US between major population
               | centers, including_
               | 
               | And there is nothing stopping those lines from being
               | built other than the lack of will to do it. As before,
               | the will doesn't exist because better options exist.
        
               | nradov wrote:
               | There are a lot more obstacles than lack of will. There
               | are also property rights, environmental reviews,
               | availability of skilled workers, and lack of capital. HN
               | users sometimes have this weird fantasy that with enough
               | political will it's possible to make enormous changes but
               | that's simply not how things operate in a republic with a
               | dual sovereignty system.
        
               | 9rx wrote:
               | _> There are also property rights, environmental reviews,
               | availability of skilled workers, and lack of capital._
               | 
               | There is no magic in this world like you seem to want to
                | pretend. All of those things simply boil down to
               | people. Property rights only exist because people say
               | they do, environmental reviews only exist because people
               | say they do, skilled workers are, well, literally people,
               | and the necessary capital is already created. If the
               | capital is being directed to other purposes, it is only
               | because people decided those purposes are more important.
               | All of this can change if the people want it to.
               | 
               |  _> HN users sometimes have this weird fantasy that with
               | enough political will it 's possible to make enormous
               | changes but that's simply not how things operate in a
               | republic with a dual sovereignty system._
               | 
               | Hell, the republic and dual sovereignty system itself
               | only exists because that's what people have decided upon.
               | Believe it or not, it wasn't enacted by some mythical
               | genie in the sky. The people can change it all on a whim
               | if the will is there.
               | 
               | The will isn't there of course, as there is no reason for
               | the will to be there given that there are better options
                | anyway, but _if_ the will was there it'd be done already
               | (like it already is in a few corners of the country where
               | the will was present).
        
               | Kon-Peki wrote:
               | > Chicago/Detroit
               | 
               | There has been continuous regularly scheduled passenger
               | service between Chicago and Detroit since before the
               | Civil War. The current Amtrak Wolverine runs 110 MPH (180
               | KPH) for 90% of the route, using essentially the same
               | trainset that Brightline plans to use.
        
               | danans wrote:
               | Fair point. Last time I took that train (mid 1990s) it
               | didn't run to Pontiac or Troy, and I recall there being
                | very infrequent service. As far as I know, it's not the
               | major mode of passenger transit between Detroit and
               | Chicago. Cars are. That might be because of the serious
               | lack of last-mile transit connectivity in the Detroit
               | area.
        
               | Kon-Peki wrote:
               | Cars are definitely the major mode. Lots of quick
               | flights, too.
               | 
               | They've made a lot of investments since the 1990s. It's
               | much improved, though perhaps not as nice as during the
               | golden years when it was a big part of the New York
               | Central system (from the 1890s to the 1960s they had
               | daily trains that went Boston/NYC/Buffalo/Detroit/Chicago
               | through Canada from Niagara Falls to Windsor).
               | 
               | During the first Trump administration, Amtrak announced a
               | route that would go
               | Chicago/Detroit/Toronto/Montreal/Quebec City using that
               | same rail tunnel underneath the Detroit River. It was
               | supposed to start by 2030. We'll see if it happens.
        
               | fragmede wrote:
               | If you can't think of another city in Nevada off the top
               | of your head, are you even American? (Reno.)
               | 
               | Anyway, New York to Las Vegas spans most of the US. There
               | are plenty of routes in the US where rail would make
               | sense. Between Boston, New Haven, New York City,
               | Philadelphia, Baltimore, and Washington, D.C. Which has
               | the Amtrak Acela. Or perhaps Miami to Orlando. Which has
               | a privately funded high speed rail connection called
                | Brightline that runs at 200 km/h whose ridership was
               | triple what had been expected at launch.
        
               | 9rx wrote:
               | _> are you even American?_
               | 
               | I am, thankfully, not.
               | 
               |  _> Which has a privately funded high speed rail
                | connection called Brightline that runs at 200 km/h_
               | 
               | Which proves that when the will is there, it will be
               | done. The only impediment in other places is simply the
               | people not wanting it. If they wanted it, it would
               | already be there.
               | 
               | The US has been here before. It built out a pretty good,
               | even great, passenger rail network a couple of centuries
               | ago when the people wanted it. It eventually died out
               | simply because the people didn't want it anymore.
               | 
               | If they want it again in the future, it will return. But
               | as for the moment...
        
               | ambicapter wrote:
               | Yeah, a cheap transportation option is a terrible thing
               | to have... /s
        
               | 9rx wrote:
               | It's not that it would be terrible, but in the real world
               | people are generally lazy and will only do what they
               | actually want to see happen. Surprisingly, we don't yet
               | have magical AI robots that autonomously go around
               | turning all imagined ideas into reality without the need
               | for human grit.
               | 
               | Since nobody really wants passenger rail in the US, they
               | don't put in the effort to see that it exists (outside of
               | some particular routes where they do want it). In many
                | other countries, people do want broad access to passenger
               | rail (because that's all they can afford), so they put in
               | the effort to have it.
               | 
               | ~200 years ago the US did want passenger rail, they put
               | in the work to realize it, and it did have a pretty good
               | passenger rail network at the time given the period. But,
               | again, better technology came along, so people stopped
               | maintaining/improving what was there. They could do it
               | again if they wanted to... But they don't.
        
               | lazyasciiart wrote:
               | What an ironic side thread in a conversation about people
               | who are confidently ignorant.
        
               | 9rx wrote:
               | I am not sure the irony works as told unless software and
               | people are deemed to be the same thing. But what is there
               | to suggest that they are?
        
               | impossiblefork wrote:
               | I think it's unlikely that AI efforts will go as
               | railroads have. I think being an AI foundation model
               | company is more like being an airplane builder than like
               | a railway company, since you develop your technology.
        
               | etblg wrote:
                | Plenty of those similarly went bankrupt over the
                | years, and now the USA mostly has Boeing, which has
                | reached a state of continual crisis and is being
                | propped up by the government.
        
               | kridsdale1 wrote:
               | Exact same thing happened with fiber optic cable layers
               | in the late 1990s. On exactly the same routes!
        
               | cguess wrote:
                | It's because the land rights were more valuable than the
               | steel rails, which the fiber optic companies bought up.
        
           | tuveson wrote:
           | My feeling has been that it's a lot of people who work on
           | B2B SaaS and are sad they never got the chance to work
           | on the Manhattan Project. Be around the smartest people in
           | your field. Contribute something significant (but dangerous!
           | And we need to talk about it!) to humanity. But yeah computer
           | science in the 21st century has not turned out to be as
           | interesting as that. Maybe just as important! But Jeff Bezos
           | important, not Richard Feynman important.
        
             | HPsquared wrote:
             | "Overproduction of elites" is the expression.
        
           | thom wrote:
           | The Singularitarians were breathlessly worrying 20+ years
           | ago, when AI was absolute dogshit - Eliezer once stated that
           | Doug Lenat was incautious in launching Eurisko because it
           | could've gone through a hard takeoff. I don't think it's just
           | an act to launder their evil plans, none of which at the time
           | worked.
        
             | salynchnew wrote:
             | Yeah, people were generally terrified of this stuff back
             | before you could make money off of it.
        
             | notahacker wrote:
             | Fair. OpenAI _totally_ use those arguments to launder their
             | plans, but that saga has been more Silicon Valley
             | exploiting longstanding rationalist beliefs for PR purposes
             | than rationalists getting rich...
             | 
             | Eliezer did once state his intentions to build "friendly
             | AI", but seems to have been thwarted by his first order
             | reasoning about how AI decision theory _should_ work being
             | more important to him than building something that actually
             | did work, even when others figured out the latter bit.
        
           | NoMoreNicksLeft wrote:
           | > as a smokescreen to distract people from thinking clearly
           | about the more pedestrian hazards of AI that isn't self-
           | improving or superhuman,
           | 
           | Anything that can't be self-improving or superhuman almost
           | certainly isn't worthy of the moniker "AI". A true AI will be
           | born into a world that has already unlocked the principles of
           | intelligence. Humans in that world would be capable
           | themselves of improving AI (slowly), but the AI itself will
           | (presumably) run on silicon and be a quick thinker. It will
           | be able to self-improve, rapidly at first, and then more
           | rapidly as its increased intelligence allows for even quicker
           | rates of improvement. And if not superhuman initially, it
           | would soon become so.
           | 
           | We don't even have anything resembling real AI at the moment.
           | Generative models are probably some blind alley.
        
             | danans wrote:
             | > We don't even have anything resembling real AI at the
             | moment. Generative models are probably some blind alley.
             | 
             | I think that the OP's point was that it doesn't matter
             | whether it's "real AI" or not. Even if it's just a
             | glorified auto-correct system, it's one that has the clear
             | potential to overturn our information/communication systems
             | and our assumptions about individuals' economic value.
        
               | NoMoreNicksLeft wrote:
               | If that has the potential to ruin economies, then the
               | economic rot is so much more profound than anyone (me
               | included) ever realized.
        
               | jay_kyburz wrote:
               | I think when the GP says "our assumptions about
               | individuals' economic value." they mean half the
               | workforce becoming unemployed because the auto corrector
               | can do it cheaper.
               | 
               | That's going to be a swift kick to your economy, no
               | matter how strong.
        
         | James_K wrote:
         | Implicit in calling yourself a rationalist is the idea that
         | other people are not thinking rationally. There are a lot of
         | "we see the world as it really is" ideologies, and you can only
         | subscribe to one if you have a certain sense of self-assuredness
         | that doesn't lend itself to healthy debate.
        
         | resters wrote:
         | Not meaning to be too direct, but you are misinterpreting a
         | _lot_ about rationalists.
         | 
         | In my view, rationalists are often "Bayesian" in that they are
         | constantly looking for updates to their model. Consider that
         | the default approach for most humans is to believe a variety of
         | things and to feel indignant if someone holds differing views
         | (the adage _never discuss religion or politics_). If one
         | adopts the perspective that their own views might be wrong, one
         | must find a balance between confidently acting on a belief and
         | being open to the belief being overturned or debunked (by
         | experience, by argument, etc.).
         | 
         | Most rationalists I've met _enjoy_ the process of updating or
         | discarding beliefs in favor of ones they consider more correct.
         | But to be fair to one's own prior attempts at rationality, one
         | should try reasonably hard to defend one's _current_ beliefs so
         | that they can be fully and soundly replaced if necessary,
         | without leaving any doubt that they were insufficiently
         | supported, etc.
         | 
         | To many people (the kind of people who _never discuss religion
         | or politics_) all this is very uncomfortable and reveals that
         | rationalists are egotistical and lacking in humility. Nothing
         | could be further from the truth. It takes _tremendous_ humility
         | to assume that one's own beliefs are quite possibly wrong. The
         | very name of Eliezer's blog "Less Wrong" makes this humility
         | quite clear. Scott Alexander is also very open with his priors
         | and known biases/foci, and I view his writing as primarily
         | focusing on big-picture epistemological patterns that most
         | people end up overlooking because they are busy, etc.
         | 
         | One final note about the AI-dystopianism common among
         | rationalists -- we really don't know yet what the outcome will
         | be. I personally am a big fan of AI, but we as humans do not
         | remotely understand the social/linguistic/memetic environment
         | well enough to know for sure how AI will impact our society and
         | culture. My guess is that it will amplify rather than mitigate
         | differences in innate intelligence in humans, but that's a
         | tangent.
         | 
         | I think to some, the rationalist movement feels like historical
         | "logical positivist" movements that were reductionist and
         | socially Darwinian. While it is obvious to me that the
         | rationalist movement is nothing of the sort, some people view
         | the word "rationalist" as itself full of the implication that
         | self-proclaimed rationalists consider themselves superior at
         | reasoning. In fact they simply employ a heuristic for
         | considering their own rationality over time and attempting to
         | maximize it -- this includes listening to "gut feelings" and
         | hunches, etc., in case you didn't realize.
        
           | matthewdgreen wrote:
           | My impression is that many rationalists enjoy believing that
           | they update their beliefs, but in practice they're human and
           | just as attached to preconceived notions as anyone else. But
           | if you go around telling everyone that updating is your
           | super-power, you're going to be a lot less humble about your
           | own failures to do so.
           | 
           | If you want to see how human and tribal rationalists are, go
           | criticize the movement as an outsider. Or try to write a
           | mildly critical NYT piece about them and watch how they
           | react.
        
             | thom wrote:
             | Yes, I've never met anyone who stated they have "strong
             | opinions, weakly held" who wasn't A) some kind of arsehole
             | and B) lying.
        
               | zbentley wrote:
               | I've met a few people who walked that walk without being
               | assholes ... to others. They tended to have a fairly
                | intense amount of self-criticism/self-hatred, though.
               | That was more palatable than ego, to be sure, but isn't
               | likely broadly applicable.
        
               | mitthrowaway2 wrote:
                | Out of how many such people you have met?
        
           | ajkjk wrote:
           | not to be too cynical here, but I would say that the most-apt
           | description of the rationalists is that they are people who
           | would _say_ they are constantly looking for updates to their
           | models. But that they are not necessarily doing it
           | appreciably more than anyone else is. They will do it freely
           | on unimportant things---they tend to be smart people who view
           | the world intellectually and so they are free to toss or keep
           | factual beliefs about things, of which they have many, with
           | little fanfare, and sure, they get points for that. But they
           | are as rooted in their moral beliefs as anybody else is.
           | Maybe more than other people since they have such a strong
           | intellectual edifice that justifies _not_ changing their
           | minds, because they believe that their beliefs follow from
           | nearly irrefutable calculations.
        
             | resters wrote:
             | You're generalizing that all self-proclaimed rationalists
             | are hypocrites and heavily biased? I mean, regardless of
             | whether or not that is true, what is the point of making
             | such a broad generalization? Strange!
        
               | ajkjk wrote:
               | um.... because I think it's true and relevant? I'm
               | describing a pattern I have observed over many years. It
                | is of course my opinion (and is not a universal
               | statement, just what I believe to be a common
               | phenomenon).
        
           | jrflowers wrote:
           | It seems that you are conflating theoretical rationalists
           | with the actual real-life rationalists that write stuff like
           | 
           | >The quantum physicist who's always getting into arguments on
           | the Internet, and who's essentially always right
           | 
           | "Guy Who Is Always Right" as a role in a social group is a
           | terrible target, yet it somehow seems like what rationalists
           | are aiming for every time I read any of their blog posts
        
         | benreesman wrote:
         | Any time people engage in some elaborate exercise and it
         | arrives at: "me and people like me should be powerful and not
         | pay taxes and stuff", the reason for making the argument is
         | not a noble one, the argument probably has a bunch of tricks
         | and falsehoods in it, and there's never really any way to
         | extract anything useful; greed and grandiosity are both
         | fundamentally contaminative processes.
         | 
         | These folks have a bunch of money because we allowed them to
         | privatize the commons of 20th century R&D mostly funded by the
         | DoD and done at places like Bell Labs; Thiel and others saw
         | that their interests had become aligned with more traditional
         | arch-Randian goons, and they've captured the levers of power
         | damn near up to the presidency.
         | 
         | This has quite predictably led to a real mess that's getting
         | worse by the day, the economic outlook is bleak, wars are
         | breaking out or intensifying left, right, and center, and all of
         | this traces a very clear lineage back to allowing a small group
         | of people to privatize a bunch of public goods.
         | 
         | It was a disaster when it happened in Russia in the 90s and it's
         | a disaster now.
        
         | mitthrowaway2 wrote:
         | > They don't really ever show a sense of "hey, I've got a
         | thought, maybe I haven't considered all angles to it, maybe I'm
         | wrong - but here it is".
         | 
         | Aren't these the people who started the trend of writing things
         | like " _epistemic status: mostly speculation_ " on their blog
         | posts? And writing essays about the dangers of overconfidence?
         | And measuring how often their predictions turn out wrong? And
         | maintaining webpages titled "list of things I was wrong about"?
         | 
         | Are you sure you're not painting this group with an overly-
         | broad brush?
        
           | hiddencost wrote:
           | They're behind Anthropic and were behind openai being a
           | nonprofit. They're behind the friendly AI movement and
           | effective altruism.
           | 
           | They're responsible for funneling huge amounts of funding
           | away from domain experts (effective altruism in practice
           | means "Oxford math PhD writes a book report about a social
           | sciences problem they've only read about and then defunds all
           | the NGOs").
           | 
           | They're responsible for moving all the AI safety funding away
           | from disparate impact measures to "save us from skynet"
           | fantasies.
        
             | mitthrowaway2 wrote:
             | I don't see how this is a response to what I wrote. Can you
             | explain?
        
               | fatbird wrote:
               | I think GP is saying that their epistemic humility is a
               | pretense, a pose. They do a lot of throat clearing about
               | quantifying their certainty and error checking
               | themselves, and then proceed to bring about very
               | consequential outcomes anyway for absurd reasons with
               | predictable side effects that they _should_ have
                | considered but didn't.
        
               | notahacker wrote:
               | Yeah. It's not that they _never_ express uncertainty so
               | much as they like to express uncertainty as arbitrarily
               | precise and convenient-looking expected value
               | calculations which often look like far more of a
                | rhetorical tool to justify their preferences (I've
               | accounted for the uncertainty and even given a credence
               | as low as 14.2% I'm still right!) than a decision making
               | heuristic...
        
           | bakuninsbart wrote:
           | Weirdly enough, both can be true. I was tangentially involved
           | in EA in the early days, and have some friends who were more
           | involved. Lots of interesting, really cool stuff going on,
           | but there was always latent insecurity paired with
           | overconfidence and elitism as is typical in young nerd
           | circles.
           | 
           | When big money got involved, the tone shifted a lot. One
           | phrase that really stuck with me is "exceptional talent".
           | Everyone in EA was suddenly talking about finding, involving,
            | hiring exceptional talent at a time when there was more than
           | enough money going around to give some to us mediocre people
           | as well.
           | 
            | In the case of EA in particular, circlejerks lead to idiotic
           | ideas even when paired with rationalist rhetoric, so they
           | bought mansions for team building (how else are you getting
           | exceptional talent), praised crypto (because they are funding
           | the best and brightest) and started caring a lot about shrimp
           | welfare (no one else does).
        
             | mitthrowaway2 wrote:
              | I don't think this validates the criticism that "they
              | _don't really ever show a sense of [...] maybe I'm wrong_".
             | 
             | I think that sentence would be a fair description of
             | certain individuals in the EA community, especially SBF,
             | but that is not the same thing as saying that rationalists
              | _don't ever_ express epistemic uncertainty, when on
             | average they spend more words on that than just about any
             | other group I can think of.
        
             | ToValueFunfetti wrote:
             | >they bought mansions for team building
             | 
             | They bought one mansion to host fundraisers with the super-
             | rich, which I believe is an important correction. You might
             | disagree with that reasoning as well, but it's definitely
             | not as described.
        
               | notahacker wrote:
               | Wytham Abbey was bought mainly to host internal workshops
               | and "retreats", as outlined by the people who bought it h
               | ttps://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53b
               | G/... https://forum.effectivealtruism.org/posts/yggjKEeeh
               | snmMYnZd/...
               | 
               | As far as I know it's never hosted an impress-the-
               | oligarch fundraiser, which as you say would at least have
               | a logic behind it[1] even if it might seem distasteful.
               | 
               | For a philosophy which started out from the point of view
               | that much of mainstream aid was spent with little
               | thought, it was a bit of an _end of Animal Farm_ moment.
               | 
               | (to their credit, a lot of people who identified as EAs
               | were unhappy. If you drew a Venn diagram of the people
               | that objected, people who sneered at the objections[2]
               | and people who identified as rationalists you might only
               | need two circles though...)
               | 
                | [1] a pretty shaky one considering how easy it is to
                | impress American billionaires with Oxford architecture
                | _without_ going to the expense of operating a nearby
                | mansion as a venue, particularly if you happen to be a
                | charitable movement with strong links to the
                | university...
                | 
                | [2] obviously people are only objecting to it for PR
                | purposes because they're not smart enough to realise
                | that capital appreciates and that venues cost money,
                | and definitely not because they've got a pretty good
                | idea how expensive the upkeep on a little-used
                | medieval venue is and how many alternatives exist if
                | you _really_ care about the cost-effectiveness of your
                | retreat, especially to charitable movements affiliated
                | with a university...
        
               | ToValueFunfetti wrote:
                | Ah, fair enough! I had heard that "hosting wealthy
                | donors" was the primary motivation, but it appears to be
               | secondary. My bad.
               | 
               | >As far as I know it's never hosted an impress-the-
               | oligarch fundraiser
               | 
               | As far as I know, they only hosted 3 events there before
               | deciding to sell, so this is low-information.
        
             | gjm11 wrote:
             | > both can be true
             | 
             | Yes! It can be true _both_ that rationalists tend, more
             | than almost any other group, to admit and try to take
             | account of their uncertainty about things they say _and_
              | that it's fun to dunk on them for being arrogant and
             | always assuming they're 100% right!
        
             | salynchnew wrote:
             | > caring a lot about shrimp welfare (no one else does).
             | 
              | Ah. They are working out ecology through first
              | principles, I guess?
             | 
             | I feel like a lot of the criticism of EA and rationalism
             | does boil down to some kind of general criticism of naivete
             | and entitlement, which... is probably true when applied to
             | lots of people, regardless of whether they espouse these
             | ideas or not.
             | 
             | It's also easier to criticize obviously doomed/misguided
             | efforts at making the world a better place than to think
             | deeply about how many of the pressing modern day problems
             | (environmental issues, extinction, human suffering, etc.)
             | also seem to be completely intractable, when analyzed in
             | terms of the average individual's ability to take action. I
             | think some criticism of EA or rationalism is also a
             | reaction to a creeping unspoken consensus that "things are
             | only going to get worse" in the future.
        
               | freejazz wrote:
               | >I think some criticism of EA or rationalism is also a
               | reaction to a creeping unspoken consensus that "things
               | are only going to get worse" in the future.
               | 
               | I think it's that combined with the EA approach to it
               | which is: let's focus on space flight and shrimp welfare.
               | Not sure which side is more in denial about the impending
               | future?
               | 
               | I have no belief any particular individual can do
               | anything about shrimp welfare more than they can about
               | the intractable problems we do face.
        
           | Certhas wrote:
           | I think this is a valid point. But to some degree both can be
            | true. I often felt when reading some of these types of texts:
           | Wait a second, there is a wealth of thinking on these topics
            | out there; you are not at all situating all your elaborate
           | thinking in a broader context. And there absolutely is
           | willingness to be challenged, and (maybe less so) a
           | willingness to be wrong. But there also is an arrogance that
           | "we are the ones thinking about this rationally, and we will
           | figure this out". As if people hadn't been thinking and
           | discussing and (verbally and literally) fighting over all
           | sorts of adjacent and similar topics in philosophy and
           | sociology and anthropology and ... clubs and seminars
           | forever. And importantly maybe there also isn't as much taste
           | for understanding the limits of vigorous discussion and
           | rational deduction. Adorno and Horkheimer posit a dialectic
           | of rationality and enlightenment, Habermas tries to rebuild
           | rational discourse by analyzing its preconditions. Yet for
           | all the vigorous intellectualism of the rationalists, none of
           | that ever seems to feature even in passing (maybe I have
           | simply missed it...).
           | 
           | And I have definitely encountered "if you just listen to me
           | properly you will understand that I am right, because I have
            | derived my conclusions rationally" in in-person interactions.
           | 
            | On balance I'd rather have some arrogance and willingness
           | to be debated and be wrong, over a timid need to defer to
           | centuries of established thought though. The people I've met
           | in person I've always been happy to hang out with and talk
           | to.
        
             | mitthrowaway2 wrote:
             | That's a fair point. Speaking only for myself, I think I
             | fail to understand why it's important to situate
             | philosophical discussions in the context of all the
             | previous philosophers who have expressed related ideas,
             | rather than simply discussing the ideas in isolation.
             | 
             | I remember as a child coming to the same "if reality is a
             | deception, at least I must exist to be deceived" conclusion
             | that Descartes did, well before I had heard of Descartes.
             | (I don't think this makes me special, it's just a natural
             | conclusion anyone will reach if they ponder the subject). I
             | think it's harmless for me to discuss that idea in public
             | without someone saying "you need to read Descartes before
             | you can talk about this".
             | 
              | I also find my personal ethics are strongly aligned with
             | what Kant espoused. But most people I talk to are not
             | academic philosophers and have not read Kant, so when I
             | want to explain my morals, I am better off explaining the
             | ideas themselves than talking about Kant, which would be a
              | distraction anyway because I didn't _learn them_ from Kant;
              | we just arrived at the same conclusions. If I'm talking
             | with a philosopher I can just say "I'm a Kantian" as
             | shorthand, but that's really just jargon for people who
             | already know what I'm talking about.
             | 
             | I also think that while it would be unusual for someone to
             | (for example) write a guide to understanding relativity
             | without once mentioning Einstein, it also wouldn't be a
             | fundamental flaw.
             | 
              | (But I agree there's certainly no excuse for someone
             | asserting that they're right because they're rational!)
        
               | jerf wrote:
               | It may be easier to imagine someone trying to derive
               | mathematics all by themselves, since it's less abstract.
               | It's not that they won't come up with anything, it's that
               | everything that even a genius can come up with in their
               | lifetime will be something that the whole of humanity has
               | long since come up with, chewed over, simplified, had a
               | rebellion against, had a counter-rebellion against the
               | rebellion, and ultimately packaged it up in a highly
               | efficient manner into a textbook with cross-references to
               | all sorts of angles on it and dozens of elaborations. You
                | can't possibly get through all this stuff all on your
               | own.
               | 
               | The problem is less clear in philosophy than mathematics,
               | but it's still there. It's really easy on your own terms
               | to come up with some idea that the collective
               | intelligence has already revealed to be fatally flawed in
               | some undeniable manner, or at the very least, has very
               | powerful arguments against it that an individual may
               | never consider. The ideas that have survived decades,
                | centuries, and even millennia against the collective
               | weight of humanity assaulting them are going to have a
               | certain character that "something someone came up with
               | last week" will lack.
               | 
               | (That said I am quite heterodox in one way, which is that
               | I'm _not_ a big believer in reading primary sources, at
               | least routinely. Personally I think that a lot of the
               | primary sources noticeably lack the refinement and polish
               | added as humanity chews it over and processes it and I
               | prefer mostly pulling from the result of the process, and
               | not from the one person who happened to introduce a
               | particular idea. Such a source may be interesting for
               | other reasons, but not in my opinion for philosophy.)
        
               | lazyasciiart wrote:
               | Odds are good that the millions of people who have also
               | read and considered these ideas have added to what you
                | came up with at age 6. Odds are also high that people who
               | have any interest in the topic will probably learn more
               | by reading Descartes and Kant and the vast range of well
               | written educational materials explaining their thoughts
               | at every level. So if you find yourself telling people
               | about these ideas frequently enough to have opinions on
               | how they respond, you are doing both yourself and them a
               | disservice by not bothering to learn how the ideas have
               | already been criticized and extended.
        
               | voidhorse wrote:
               | Here's a very simple explanation as to why it's helpful
               | from a "first principles" style analogy.
               | 
               | Suppose a foot race. Choose two runners of equal aptitude
               | and finite existence. Start one at mile 1 and one at mile
               | 100. Who do you think will get farther?
               | 
               | Not to mention, engaging in human community and discourse
               | is a big part of what it means to be human. Knowledge
               | isn't personal or isolated, we build it together. The
               | "first principles people" understand this to the extent
                | that they have even built their own community of like-
                | minded explorers; the problem is, a big part of this bond is
               | their choice to be willfully ignorant of large swaths of
               | human intellectual development. Not only is this stupid,
               | it also is a great disservice to your forebears, who
               | worked just as hard to come to their conclusions and who
               | have been building up the edifice of science bit by bit.
               | It's completely antithetical to the spirit of scientific
               | endeavor.
        
               | Certhas wrote:
               | It really depends on why you are having a philosophical
               | discussion. If you are talking among friends, or just
               | because you want to throw interesting ideas around, sure!
               | Be free, have fun.
               | 
                | I come from a physics background. We used to (and still
                | do) have a ton of physicists who decide to dabble in a new
               | field, secure in their knowledge that they are smarter
               | than the people doing it, and that anything worthwhile
               | that has already been thought of they can just rederive
               | ad hoc when needed (economists are the only other group
               | that seems to have this tendency...) [1]. It turned out
               | every time that the people who had spent decades working
               | on, studying, discussing and debating the field in
               | question had actually figured important shit out along
               | the way. They might not have come with the mathematical
               | toolbox that physicists had, and outside perspectives
               | that challenge established thinking to prove itself again
               | can be valuable, but when your goal is to actually
               | understand what's happening in the real world, you can't
               | ignore what's been done.
               | 
               | [1] There even is an xkcd about this:
               | 
               | https://xkcd.com/793/
        
               | lukev wrote:
               | Did you discover it from first principles by yourself
               | because it's a natural conclusion anyone would reach if
               | they ponder the subject?
               | 
               | Or because western culture reflects this theme
               | continuously through all the culture and media you've
                | been immersed in since you were a child?
               | 
               | Also the idea is definitely not new to Descartes, you can
               | find echoes of it going back to Plato, so your idea isn't
                | wrong per se. But I think it underrates the extent to
               | which our philosophical preconceptions are culturally
               | constructed.
        
             | voidhorse wrote:
             | You're spot on here, and I think this is probably also why
             | they appeal to programmers and people in software.
             | 
             | I find a lot of people in software have an _insufferable_
             | tendency to simply ignore entire bodies of prior art, prior
             | research, etc. outside of _maybe_ computer science (and
             | even that can be rare), and yet they _act_ as though they
             | are the most studied participants in the subject, proudly
             | proclaiming their  "genius insights" that are essentially
             | restatements of basic facts in any given field that they
             | would have learned if they just bothered to, you know,
             | actually _do research_ and put aside their egos for half a
             | second to wonder if maybe the eons of human activity prior
             | to their precious existence might have led to some decent
             | knowledge.
        
               | nyeah wrote:
               | Yeah, though I think you may be exaggerating how often
               | the "genius insights" rise to the level of correct
               | restatements of basic facts. That happens, but it's not
               | the rule.
        
               | astrange wrote:
               | This is old engineer / old physicist syndrome.
               | 
               | https://www.smbc-comics.com/?id=2556
        
             | Aurornis wrote:
             | > But there also is an arrogance that "we are the ones
             | thinking about this rationally, and we will figure this
             | out". As if people hadn't been thinking and discussing and
             | (verbally and literally) fighting over all sorts of
             | adjacent and similar topics in philosophy and sociology and
             | anthropology and ... clubs and seminars forever
             | 
             | This is a feature, not a bug, for writers who hold an
             | opinion on something and want to rationalize it.
             | 
             | So many of the rationalist posts I've read through the
             | years come from someone who has an opinion or gut feeling
             | about something, but they want it to be seen as something
             | more rigorous. The "first principles" writing style is a
             | license to throw out the existing research on the topic,
             | including contradictory evidence, and construct an all new
             | scaffold around their opinion that makes it look more
             | valid.
             | 
             | I use the "SlimeTimeMoldTime - A Chemical Hunger" blog
             | series as an example because it was so widely shared and
             | endorsed in the rationalist community:
              | https://slimemoldtimemold.com/2021/07/07/a-chemical-hunger-p...
              | It even received a financial grant from Scott Alexander
              | of Astral Codex Ten.
             | 
             | Actual experts were discrediting the series from the first
             | blog post and explaining all of the author's errors, but
             | the community soldiered on with it anyway, eventually
             | making the belief that lithium in the water supply was
             | causing the obesity epidemic into a meme within the
             | rationalist community. There's no evidence supporting this
             | and countless take-downs of how the author misinterpreted
             | or cherry-picked data, but because it was written with the
             | rationalist style and given the implicit blessing of a
             | rationalist figurehead it was adopted as ground truth by
             | many for years. People have been waking up to issues with
             | the series for a while now, but at the time it was
             | remarkable how quickly the idea spread as if it was a true,
             | novel discovery.
        
           | Aurornis wrote:
           | I grew up with some friends who were deep into the early
           | roots of online rationalism, even slightly before LessWrong
           | came online. I've been around long enough to recognize the
           | rhetorical devices used in rationalist writings:
           | 
           | > Aren't these the people who started the trend of writing
           | things like "epistemic status: mostly speculation" on their
           | blog posts? And writing essays about the dangers of
           | overconfidence? And measuring how often their predictions
           | turn out wrong? And maintaining webpages titled "list of
           | things I was wrong about"?
           | 
           | There's a lot of in-group signaling in rationalist circles
           | like the "epistemic status" taglines, posting predictions,
           | and putting your humility on show.
           | 
           | This has come full-circle, though, and now rationalist
           | writings are generally pre-baked with hedging, both-sides
           | takes, escape hatches, and other writing tricks that make it
           | easier to claim they weren't _entirely_ wrong in the future.
           | 
            | A perfect example is the recent "AI 2027" doomsday scenario
           | that predicts a rapid escalation of AI superpowers followed
           | by disaster in only a couple years: https://ai-2027.com/
           | 
           | If you read the backstory and supporting blog posts from the
           | authors they are filled to the brim with hedges and escape
           | hatches. Scott Alexander wrote that it was something like
           | "the 80th percentile of their fast scenario", which means
            | when it fails to come true he can simply say it wasn't
           | actually his median prediction anyway and that they were
           | writing about the _fast_ scenario. I can already predict that
           | the  "We were wrong" article will be more about what they got
           | right with a heavy emphasis on the fact that it wasn't their
           | real median prediction anyway.
           | 
           | I think this group relies heavily on the faux-humility and
           | hedging because they've recognized how powerful it is to get
           | people to trust them. Even the comment above is implying that
           | because they say and do these things, they must be immune
           | from the criticism delivered above. That's exactly why they
           | wrap their posts in these signals, before going on to do
           | whatever they were going to do anyway.
        
             | Veedrac wrote:
             | If putting up evidence about how people were wrong in their
             | predictions, I suggest actually pointing at predictions
              | that were wrong, rather than at recent predictions about
              | the future whose resolution you disagree over. If
              | putting up evidence about how people make
             | excuses for failing predictions, I suggest actually showing
             | them do so, rather than projecting that they will do so and
             | blaming them for your projection.
        
               | Aurornis wrote:
               | It's been a while since I've engaged in rationalist
               | debates, so I forgot about the slightly condescending,
               | lecturing tone that comes out when you disagree with
               | rationalist figureheads. :) You could simply ask "Can you
               | provide examples" instead of the "If you ____ then I
               | suggest ____" form.
               | 
               | My point wasn't to nit-pick individual predictions, it
               | was a general explanation of how the game is played.
               | 
               | Since Scott Alexander comes up a lot, a few randomly
               | selected predictions that didn't come true:
               | 
               | - He predicted at least $250 million in damages from
               | Black Lives Matter protests.
               | 
               | - He predicted Andrew Yang would win the 2021 NYC mayoral
               | race with 80% certainty (he came in 4th place)
               | 
               | - He gave a 70% chance to Vitamin D being generally
               | recognized as a good COVID treatment
               | 
               | This is just random samples from the first blog post that
                | popped up in Google:
                | https://www.astralcodexten.com/p/grading-my-2021-predictions
               | 
                | It's also noteworthy that a lot of his
               | predictions are about his personal life, his own blogging
               | actions, or [redacted] things. These all get mixed in
               | with a small number of geopolitical, economic, and
               | medical predictions with the net result of bringing his
               | overall accuracy up.
        
             | mitthrowaway2 wrote:
             | Yes, I do think that these hedging statements make them
             | immune from the specific criticism that I quoted.
             | 
             | If you want to say their humility is not genuine, fine. I'm
             | not sure I agree with it, but you are entitled to that
             | view. But to simultaneously be attacking the same community
             | for _not ever_ showing a sense of maybe being wrong or
             | uncertain, and also for expressing it _so often it 's
             | become an in-group signal_, is just too much cognitive
             | dissonance.
        
               | Aurornis wrote:
                | > Yes, I do think that these hedging statements make them
               | immune from the specific criticism that I quoted.
               | 
               | That's my point: Their rhetorical style is interpreted by
               | the in-group as a sort of weird infallibility. Like
               | they've covered both sides and therefore the work is
               | technically correct in all cases. Once they go through
               | the hedging dance, they can put forth the opinion-based
               | point they're trying to make in a very persuasive way,
               | falling back to the hedging in the future if it turns out
               | to be completely wrong.
               | 
               | The writing style looks different depending on where you
               | stand: Reading it in the forward direction makes it feel
               | like the main point is very likely. Reading it in the
               | backward direction you notice the hedging and decide they
               | were also correct. Yet at the time, the rationalist
               | community attaches themselves to the position being
               | pushed.
               | 
               | > But to simultaneously be attacking the same community
               | for not ever showing a sense of maybe being wrong or
               | uncertain, and also for expressing it so often it's
               | become an in-group signal, is just too much cognitive
               | dissonance.
               | 
               | That's a strawman argument. At no point did I "attack the
               | community for not ever showing a sense of maybe being
               | wrong or uncertain".
        
               | mitthrowaway2 wrote:
               | > That's a strawman argument. At no point did I "attack
               | the community for not ever showing a sense of maybe being
               | wrong or uncertain".
               | 
               | Ok, let's scroll up the thread. When I refer to "the
               | specific criticism that I quoted", and when you say
               | "implying that because they say and do these things, they
               | must be immune from the criticism delivered above": what
               | do you think was the "criticism delivered above"? Because
               | I thought we were talking about contrarian1234's claim to
               | exactly this "strawman", and you so far have not appeared
               | to not agree with me that this criticism was invalid.
        
           | freejazz wrote:
           | >Are you sure you're not painting this group with an overly-
           | broad brush?
           | 
           | "Aren't these the people who"...
           | 
           | > And writing essays about the dangers of overconfidence? And
           | measuring how often their predictions turn out wrong? And
           | maintaining webpages titled "list of things I was wrong
           | about"?
           | 
            | What's the value of that if it isn't seriously applied to
            | their own ideas? Otherwise, what you described is just
            | another form of the exact kind of self-congratulation
            | often (reasonably, IMO) lobbed at these "people".
        
         | BurningFrog wrote:
         | An unfortunate fact is that people who are very annoying can
         | also be right...
        
         | aredox wrote:
         | They are the perfectly rational people who await the arrival of
         | a robot god...
         | 
         | Note they are a mostly American phenomenon. To me, that's a
         | consequence of the oppressive culture of "cliques" in American
         | schools. I would even suppose it is a second-order effect of
         | the deep racism of American culture: the first level is to
         | belong to the "whites" or the "blacks", but when it is not
         | enough, you have to create your own subgroup with its identity,
         | pride, conferences... To make yourself even more betterer than
         | the others.
        
           | nradov wrote:
           | There is certainly some racism in parts of American culture.
           | We have a lot of work to do to fix that. But on a relative
           | basis it's also one of the _least_ racist cultures in the
           | world.
        
         | Seattle3503 wrote:
         | I think the rationalists have failed to humanize themselves.
         | They let their thinkpieces define them entirely, but a
         | studiously considered thinkpiece is a narrow view into a
         | person. If rationalists were more publicly vulnerable, people
         | might find them more publicly relatable.
        
           | mitthrowaway2 wrote:
           | Scott Aaronson is probably the most publicly-vulnerable
           | academic I've ever found, at least outside of authors who
           | write memoirs about childhood trauma. I think a lot of other
           | prominent rationalists also put a lot of vulnerability out
           | there.
        
             | Seattle3503 wrote:
             | He didn't take the rationalist label until today. Him doing
             | so might help their image.
        
               | mitthrowaway2 wrote:
               | Right, but him doing so is the very context of this
               | discussion, which is why I mentioned him in particular.
               | Scott Alexander is more well-known as a rationalist and
               | also (IMO) displays a lot of vulnerability in his
               | writing.
        
         | iNic wrote:
         | Every community has a long list of etiquettes, rules and shared
         | knowledge that is assumed and generally not spelled out
         | explicitly. One of the core assumptions of the rationalist
         | community is that every statement has uncertainty unless you
         | explicitly spell out that you are certain! This came about as a
         | matter of practicality, as it would be inconvenient to preempt
         | every other sentence with "I'm uncertain about this". Many
         | discussions you will see have the flavor of "strong opinions,
         | lightly held" for this reason.
        
         | kypro wrote:
         | There are few things I hold strong opinions on, but where I
         | do, and where those opinions are also out of step with what
         | most people think, I am very vocal about them.
         | 
         | I see this in rationalist spaces too - it doesn't really make
         | sense for people to talk about things that they believe in
         | strongly but that 95%+ of the public also believe in (like the
         | existence of air), or that they don't have a strong opinion on.
         | 
         | I am a very vocal doomer on AI because I predict with high
         | probability it's going to be very bad for humanity and this is
         | an opinion which, although shared by some, is quite
         | controversial and probably only held by 30% of the public.
         | Given the importance of the subject, my confidence, and the
         | fact that I feel the vast majority of people are either
         | wrong or are significantly underweighting catastrophic
         | risks, I have to be vocal about it.
         | 
         | Do I acknowledge I might be wrong? Sure, but for me the
         | probability is low enough that I'm comfortable making very
         | strong and unqualified statements about what I believe will
         | happen. I suspect others in the rationalist community like
         | Eliezer Yudkowsky think similarly.
        
           | megaman821 wrote:
            | How confident should other people be that random people in
            | conversation or commenters on the internet can accurately
            | predict the future? I strongly believe that nearly 100%
           | are wrong in both major and minor ways.
           | 
           | Also, when you say you have a strong belief, does that mean
            | you have emptied your retirement accounts and are enjoying
           | all you can in the moment until the end comes?
        
             | mitthrowaway2 wrote:
             | I'm not kypro, but what counts as "strong belief" depends a
             | lot on the context.
             | 
             | For example, I won't cross the street without 99.99%
             | confidence that I will survive. I cross streets so many
             | times that a lower threshold like 99% would look like
             | insanely risky dart-into-traffic behaviour.
             | 
             | If an asteroid is heading for earth, then even a 25%
             | probability of apocalyptic collision is enough that I would
             | call it very high, and spend almost all my focus attempting
             | to prevent that outcome. But I wouldn't empty my retirement
             | account for the sake of hedonism because there's still a
             | 75% chance I make it through and need to plan my
             | retirement.
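              | 
              | A quick sketch of the street-crossing arithmetic (the
              | crossing count is a made-up number, purely for
              | illustration):
              | 
              |     n = 20_000           # lifetime crossings (assumed)
              |     q = 0.99             # acceptable lifetime survival
              |     print(q ** (1 / n))  # ~0.9999995 per crossing
              |     print(0.99 ** n)     # ~5e-88: 99% per crossing
              |                          # really is dart-into-traffic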
        
         | jandrese wrote:
         | They remind me of the "Effective Altruism" crowd who get
         | completely wound up in these hypothetical logical thought
         | exercises and end up coming to insane conclusions that they
         | feel trapped in because they got there using pure logic. Not
         | realizing that their initial conditions were highly artificial
         | so any conclusion they reach is only of academic value.
         | 
         | There is a term for this. "Getting stuck up your own butt." It
         | wouldn't be so bad except that said people often take on an air
         | of absolute superiority because they used "only logic" and in
         | their heads they cannot be wrong. Many people end up thinking
         | like this as teenagers or 20 somethings, but most will have
         | someone in their life who smacks them over the head and tells
         | them to stop being so foolish, but if you have enough money and
         | the Internet you can insulate yourself from that kind of
         | oversight.
        
           | OtherShrezzing wrote:
           | They _are_ the effective altruism crowd.
        
           | troyastorino wrote:
           | The overlap between the Effective Altruism community and the
           | Rationalist community is extremely high. They're largely the
           | same people. Effective Altruism gained a lot of early
           | attention on LessWrong, and the pessimistic focus on AI
           | existential risk largely stems from an EA desire to avoid
           | "temporal-discounting" bias. The reasoning is something like:
           | if you accept that future people count just as much as
           | current people, and that the number of future people vastly
           | outweighs everyone alive today (or who has ever lived), then
           | even small probabilities of catastrophic events wiping out
           | humanity yield enormous negative expected value. Therefore,
           | nothing can produce greater positive expected value than
           | preventing existential risks--so working to reduce these
           | risks becomes the highest priority.
           | 
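            | To make the expected-value step concrete, here is a toy
            | calculation (every number is invented purely for
            | illustration):
            | 
            |     future_people = 1e16    # assumed future lives
            |     risk_cut = 1e-9         # assumed reduction in
            |                             # extinction risk bought
            |     direct_saves = 1e6      # conventional charity,
            |                             # same cost (assumed)
            |     print(future_people * risk_cut)  # 1e7
            |     print(direct_saves)              # 1e6: x-risk "wins"
            | 
            | Note that the conclusion is driven entirely by the two
            | assumed, unmeasurable inputs.
            | 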
           | People in these communities are generally quite smart, and
           | it's seductive to reason in a purely logical, deductive way.
           | There is real value in thinking rigorously and in making sure
           | you're not beholden to commonly held beliefs. But, like you
           | said, reality is complex, and it's really hard to pick
           | initial premises that capture everything relevant. The insane
           | conclusions they get to could be avoided by re-checking &
           | revising premises, especially when the argument is going in a
           | direction that clashes with history, real-world experience,
           | or basic common sense.
        
             | jdmichal wrote:
             | I'm not familiar with any of these communities. Is there
             | also a general bias towards one side between "the most
              | important thing gets the _most_ resources" and "the
              | most important thing gets _all_ the resources"? Or, in
             | other words, the most important thing is the only important
             | thing?
             | 
             | IMO it's fine to pick a favorite and devote extra resources
             | to it. But that turns less fine when one also starts
             | working to deprive everything else of any oxygen because
             | it's not your favorite. (And I'm aware that this criticism
             | applies to lots of communities.)
        
               | nearbuy wrote:
               | It's not the case. Effective altruists give to dozens of
               | different causes, such as malaria prevention,
               | environmentalism, animal welfare, and (perhaps most
               | controversially) extinction risk. It can't tell you which
               | root values to care about. It just asks you to consider
               | whether the charity is impactful.
               | 
               | Even if an individual person chooses to direct all their
               | donations to a single cause, there's no way to get
               | everyone to donate to a single cause (nor is EA
               | attempting to). Money gets spread around because people
               | have different values.
               | 
               | It absolutely does take some money away from other
               | causes, but only in the sense that all charities do: if
               | you give a lot to one charity, you may have less money to
               | give to others.
        
               | JoshuaDavid wrote:
               | The general idea is that on the margin (in the economics
               | sense), more resources should go to the most
                | effective+neglected thing, and the amount of resources I
               | control is approximately zero in a global sense, so I
               | personally should direct all of my personal giving to the
               | highest impact thing.
        
             | dasil003 wrote:
              | Intelligence and rational thought are useful, but like any
             | strategy it has its tradeoffs and limitations. No amount of
             | intelligence can overcome the chaos of long time horizons,
             | especially when we're talking about human civilization.
             | IMHO it's reasonable to pick a long-term problem/risk and
             | focus on solving it. But it's pure hubris to think
             | rationality will give you anything approaching high
             | confidence of what the biggest problems and risks actually
             | are on a 20-50 year time horizon, let alone 200-500 years
             | or longer.
             | 
             | The whole reason we even have time to think this way is
             | because we are at the peak of an industrial civilization
             | that has created a level of abundance that allows a lot of
             | people a lot of time to think. But the whole situation that
             | we live in is not stable at all, "progress" could continue,
             | or we could hit a peak and regress. As much as we can see a
             | lot of long-term trajectories (eg. peak oil, global
             | warming), we really have no idea what will be the triggers
             | and inflection points that change the social fabric in ways
             | that are unforeseeable and quickly invalidate whatever
             | prior assumptions all that deep thinking was resting upon.
             | I mean 50 years ago we thought overpopulation was the
             | biggest risk, and that thinking has completely flipped even
             | without a major trajectory change for industrial
             | civilization in that time.
        
               | lisper wrote:
               | I think one can levy a much more specific critique of
               | rationalism: rationalism is in some sense self-defeating.
               | If you are rational you will necessarily conclude that
                | the fundamental dynamic that drives (the interesting
               | parts of) the universe is Darwinian evolution, which is
               | not rational. It blindly selects for reproductive fitness
               | at the expense of all else. If you are a gene, you can
               | probably produce more offspring in an already-
               | industrialized environment by making brains that lean
               | more towards misogyny and sexual promiscuity than gender
               | equality and intellectual achievement.
               | 
               | The real conflict here is between Darwinism and
               | enlightenment ideals. But I have yet to see any self-
               | styled Rationalists take this seriously.
        
               | mettamage wrote:
                | I always liken this to the idea that we're all asteroids
                | floating in space. There's no free will and everything is
               | determined. We just see the whole thing unfold from one
               | conscious perspective.
               | 
               | Emotionally I don't subscribe to this view. Rationally I
               | do.
               | 
               | My critique for rational people is that they don't seem
               | to fully take experience into account. It's assumptions +
               | rationality + experience/data + whatever strong
               | inclinations one has that seems to be the full picture
               | for me.
        
               | Retric wrote:
               | > no free will
               | 
               | That always seemed like a meaningless argument to me. To
               | an outside observer free will is indistinguishable from a
               | random process over some range of possibilities. You
                | aren't going to randomly go to sleep with your hand in a
                | fire; there's some hard-coded biology preventing that
                | choice, but that only means human behavior isn't
                | completely random, which is hardly a groundbreaking
                | discovery.
               | 
               | At the other end we have no issues making an arbitrary
               | decision where there's no way to predict what the better
               | choice is. So what exactly does free will bring to the
               | table that we're missing without it? Some sort of
                | mystical soul? Well, what if that's also deterministic?
               | Unpredictability is useful in game theory, but computers
               | can get that from a hardware RNG based on quantum
               | processes like radioactive decay, so it doesn't mean
               | much.
               | 
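                | (A concrete version of the game-theory point: a mixed
                | strategy needs only a random source. A sketch, with
                | Python's secrets module standing in for the hardware
                | RNG:
                | 
                |     import secrets
                |     # optimal matching-pennies play is 50/50;
                |     # any good RNG suffices, no "will" required
                |     def play():
                |         return secrets.choice(["heads", "tails"])
                | 
                | This buys full game-theoretic unpredictability with
                | no metaphysics at all.)
                | 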
               | Finally, subjectively the answer isn't clear so what
               | difference does it make?
        
               | mettamage wrote:
               | > That always seemed like a meaningless argument to me.
               | 
                | Same: it is not the lived experience. I notice that I
                | care about free choice.
               | 
               | The idea that there's no free will may be a pessimistic
               | outlook to some but to me it's a strictly neutral one. It
                | used to be a bit negative, until I looked more closely
                | and saw that there's a difference between looking at a
                | situation objectively and having a lived experience.
                | When it comes
               | to my inclinations and how I want to live life, lived
               | experience takes precedence.
               | 
                | I don't have my thoughts fully sharp on it, but I don't
                | think the concept even exists philosophically, which I
                | think is also what you're getting at. It's a conceptual
                | remnant from the past.
        
               | lisper wrote:
               | If you get down to the quantum level there is no such
               | thing as objective reality. Our perception that the world
               | is made of classical objects that actually exist at
               | particular places at particular times and have continuity
               | of identity is an illusion. But it's a really compelling
               | illusion, and you won't go far wrong treating it as if it
               | were the truth in 99% of real-world situations. Likewise,
               | free will is an illusion, nothing more than a reflection
               | of our ignorance of how our brains work. But it is a
               | really compelling illusion, and you won't go far wrong
               | treating it as if it were the truth, at least some of the
               | time.
        
               | mettamage wrote:
               | > If you get down to the quantum level there is no such
               | thing as objective reality.
               | 
                | What do you mean by that? It still exists, doesn't it?
               | Albeit in a probabilistic sense that becomes non-
               | probabilistic at larger scales.
               | 
               | I don't know much about quantum other than the high level
               | conceptual stuff.
        
               | stuartjohnson12 wrote:
               | To the contrary, here's a series of essays on the subject
               | of evolutionary game theory, the incentives created by
               | competition, and its consequences for human wellbeing:
               | 
               | https://www.lesswrong.com/s/kNANcHLNtJt5qeuSS
               | 
               | "Moloch hasn't won" is a lengthy critique of the argument
               | you are making here.
        
               | lisper wrote:
               | That doesn't seem to be on point to me. I'm not talking
               | about being "caught in bad equilibria". My assertion is
                | that rationalism itself _is not stable_, that the
               | (apparent) triumph of rationalism since the Enlightenment
               | was a transient, not an equilibrium. And one of the
               | reasons it was a transient is that self-styled
               | rationalists believed (and apparently still believe) that
               | rationalism will inevitably triumph because it is
               | rational, because it is in more intimate contact with
               | reality than religion and superstition. But this is wrong
               | because it overlooks the fact that what triumphs in the
               | long run is simply reproductive fitness. Being in contact
               | with reality can be actively harmful to reproductive
               | fitness if it leads you to, say, decide not to have kids
               | because you are pessimistic about the future.
        
               | munksbeer wrote:
               | I hesitate to nitpick, but Darwinism (as far as I know)
               | is not really the term to use because Darwin's theory was
               | limited to life on earth. Only later was the concept
               | generalised into "natural selection" or "survival of the
               | fittest".
               | 
               | I'm not sure I entirely understand what you're arguing
               | here, but I absolutely do agree that the most powerful
               | force in the universe is natural selection.
        
               | lisper wrote:
               | The term "Darwinian evolution" applies to any process
                | that comprises iterated replication with random mutation,
                | followed by selection for some quality metric. Darwin
               | himself would not have defined it that way, but he still
               | deserves the credit for being the first to recognize and
               | document the power of this simple process.
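                | 
                | (A minimal sketch of that loop, for concreteness; the
                | bit-string encoding, 1% mutation rate, and "count the
                | 1-bits" quality metric are all illustrative assumptions,
                | nothing specific to biology:)
                | 
                |   import random
                | 
                |   def evolve(fitness, length=20, pop=50, gens=100):
                |       # Random starting population of bit strings.
                |       ps = [[random.randint(0, 1) for _ in range(length)]
                |             for _ in range(pop)]
                |       for _ in range(gens):
                |           # Replication with random mutation: each child
                |           # copies a parent, each bit flipped with
                |           # probability 1%.
                |           kids = [[b ^ (random.random() < 0.01)
                |                    for b in p] for p in ps]
                |           # Selection for the quality metric: keep the
                |           # fittest half of parents + children.
                |           ps = sorted(ps + kids, key=fitness,
                |                       reverse=True)[:pop]
                |       return max(ps, key=fitness)
                | 
                |   best = evolve(fitness=sum)  # quality = count of 1s
                |   print(sum(best))            # climbs toward `length`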
        
             | AnthonyMouse wrote:
             | > Therefore, nothing can produce greater positive expected
             | value than preventing existential risks--so working to
             | reduce these risks becomes the highest priority.
             | 
             | Incidentally, the flaw in this theory is in thinking you
             | understand what all the existential risks _are_.
             | 
             | Suppose you clock "malicious AI" as a huge risk and then
             | hamper AI, but it turns out the bigger risk is not doing
             | space exploration, which AI would have accelerated, because
             | something catastrophic yet already-inevitable is going to
             | happen to the Earth in a few hundred years and if we're not
             | sustainably multi-planetary by then it's all over.
             | 
              | The thing evolution teaches us is that diversity is a group
              | survival trait. Anybody insisting "nobody anywhere should
              | do X" is more likely to cause an ELE (extinction-level
              | event) than prevent one.
        
               | TeMPOraL wrote:
               | > _Incidentally, the flaw in this theory is in thinking
               | you understand what all the existential risks are._
               | 
                | The rationalist community understands that very well. They
               | even know how to put bounds on the unknowns and their own
               | lack of information.
               | 
               | > _The thing evolution teaches us is that diversity is a
               | group survival trait. Anybody insisting "nobody anywhere
               | should do X" is more likely to cause an ELE than prevent
               | one._
               | 
               | Right. Good thing they'd agree with you 100% on this.
        
               | AnthonyMouse wrote:
               | The nature of unknowns is that you don't know them.
               | 
                | What's the probability of an AI singularity? It has never
                | happened before, so you have no priors, and any number
                | you assign will be pure speculation.
        
               | TeMPOraL wrote:
               | Same is true about anything you're trying to forecast, by
               | definition of it being in the future. And yet people have
               | figured out how to make predictions more narrow than
               | shrugging.
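                | 
                | (One textbook tool for events with zero observed
                | occurrences is Laplace's rule of succession; a minimal
                | sketch, where the count of "relevant trials" is an
                | illustrative assumption:)
                | 
                |   # Laplace's rule of succession: with s successes seen
                |   # in n trials, estimate the next-trial probability as
                |   # (s + 1) / (n + 2). Nonzero even when s = 0; whether
                |   # anything counts as a relevant trial for a
                |   # singularity is, of course, the contested part.
                |   def rule_of_succession(s, n):
                |       return (s + 1) / (n + 2)
                | 
                |   print(rule_of_succession(0, 100))  # ~0.0098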
        
               | freejazz wrote:
               | "And the general absolutist tone of the community. The
               | people involved all seem very... Full of themselves ?"
               | 
               | >And yet people have figured out how to make predictions
               | more narrow than shrugging
               | 
               | And?
        
               | senko wrote:
               | >> It has never happened before
               | 
               | > Same is true about anything you're trying to forecast,
               | by definition of it being in the future
               | 
               | There _might_ be some flaws in this line of reasoning...
        
               | jandrese wrote:
               | "It is difficult to make predictions, especially about
               | the future."
               | 
               | Most of the time we make predictions based on how similar
               | events happened in the past. For completely novel
                | situations it's close to impossible to make a prediction,
                | and reckless to base policy on one.
        
               | ffsm8 wrote:
                | That's strictly true, but I feel like you're
                | misunderstanding something. Most people aren't actually
                | doing anything truly novel, so very few people ever have
                | to even attempt to predict things in this way.
                | 
                | But it was necessary at the beginning of flight, and the
                | flight to the moon would never have been possible either
                | without a few talented people being able to make
                | predictions about scenarios they knew little about.
                | 
                | There are just way too many people around nowadays, which
                | is why most of us never get confronted with such novel
                | topics, and consequently we don't know how to reason
                | about them.
        
               | astrange wrote:
               | > They even know how to put bounds on the unknowns and
               | their own lack of information.
               | 
               | No they don't. They think they can do this because
               | they've accidentally reinvented the philosophy "logical
               | positivism", which philosophers gave up on because it
               | doesn't work. (This is similar to how they accidentally
               | reinvented reconstructing arguments and called it
               | "steelmanning".)
               | 
               | https://metarationality.com/probability-limitations
        
               | bayarearefugee wrote:
               | That's only one flaw in the theory.
               | 
               | There are others, such as the unproven, narcissistic and
               | frankly unlikely-to-be-true assumption that humanity
               | continuing to exist is a net positive in the long run.
        
             | im3w1l wrote:
              | I think most everyone can agree with this: being 100%
              | rigorous and rational, reasoning from first principles and
              | completely discarding received wisdom is a great trait in a
              | philosopher but a terrible trait in a policymaker, because
              | for the former, exploring ideas for the benefit of future
              | generations is more important than whether they ultimately
              | reach the right conclusion or not.
        
               | Johanx64 wrote:
               | > Being 100% rigorous and rational, reasoning from first
               | principles
               | 
               | It really annoys me when people say that those religious
               | cultists do that.
               | 
               | They derive their bullshit from faulty, poorly thought
               | out premises.
               | 
                | If you fuck up the very first calculations of the
                | algorithm, it doesn't matter how rigorous all the
                | subsequent steps are. The results are going to be all
                | wrong.
        
             | nradov wrote:
             | In what sense are people in those communities "quite
             | smart"? Stupid is as stupid does. There are plenty of
             | people who get good grades and score highly on standardized
             | tests, but are in fact nothing but pontificating blowhards
             | and useless wankers.
        
               | astrange wrote:
               | They're members of a religion which says that if you do
               | math in your head the right way you'll be correct about
               | everything, and so they think they're correct about
               | everything.
               | 
               | They also secondarily believe everyone has an IQ which is
               | their DBZ power level; they believe anything they see
               | that has math in it, and IQ is math, so they believe
               | anything they see about IQ. So if you avoid trying to
               | find out your own IQ you can just believe it's really
               | high and then you're good.
               | 
                | Unfortunately this led them to the conclusion that
               | computers have more IQ than them and so would
               | automatically win any intellectual DBZ laser beam fight
               | against them / enslave them / take over the world.
        
               | jeffhwang wrote:
                | If only I could +1 this more than once! I have
                | occasionally learned valuable things from people in the
                | rationalist community, but this overall lack of humility
                | --and strangely blinkered view of the humanities and of
                | important topics like, say, the history of science
                | relevant to "STEM"--ultimately turned me off to the
                | movement as a whole. And I love science and math! It just
                | shouldn't belong to people with this (imo) childish model
                | of people, IQ, etc.
        
             | bena wrote:
             | Technically "long-termism" _should_ lead them straight to
             | nihilism. Because, eventually, everything will end. One way
             | or another. The odds are just 1. At some point, there are
              | no more future humans. The number of humans is zero. Also,
             | due to the nature of the infinite, any finite thing is
             | essentially a rounding error and not worth concerning
             | oneself with.
             | 
             | I get the feeling these people often want to _seem_ smarter
             | than they are, regardless of how smart they are. And they
             | want to get money to ostensibly  "consider these issues",
             | but really they want money for nothing.
             | 
             | If they wanted to do right by the future masses, they
             | should be looking to the things that are affecting us right
              | now. But they treat _those_ issues as if they'll work out
              | in the wash.
        
               | aspenmayer wrote:
               | > Technically "long-termism" should lead them straight to
               | nihilism. Because, eventually, everything will end. One
               | way or another. The odds are just 1. At some point, there
                | are no more future humans. The number of humans is zero.
               | Also, due to the nature of the infinite, any finite thing
               | is essentially a rounding error and not worth concerning
               | oneself with.
               | 
               | The current sums invested and donated in altruist causes
               | are rounding errors themselves compared to GDPs of
                | countries, so the revealed preference of those investing
                | and donating to altruist causes is to care about both the
                | future and the present.
               | 
               | Are you saying that they should give a greater preference
               | to help those who already exist rather than those who may
               | exist in the future?
               | 
               | I see a lot of Peter Singer's ideas in modern "effective"
               | altruism, but I get the sense from your comment that you
               | don't think that they have good reasons for doing what
                | they do, or that their reasoning leads them to support
                | well-meaning but ineffective solutions. I am trying to
               | understand your position without misrepresenting your
               | point or goals. Are you naysaying or do you have an
               | alternative?
               | 
               | https://en.wikipedia.org/wiki/Peter_Singer
        
             | dmurray wrote:
             | The other weird direction it leads is space travel.
             | 
             | If you assume we eventually figure out long distance space
             | travel and humanity spreads across the galaxy, there could
             | in the future be quadrillions of people, growing at some
             | kind of exponential rate. So accelerating the space race by
             | even an hour is equivalent to bringing billions of new
             | souls into existence.
        
               | munksbeer wrote:
                | I don't see how bringing new souls (whatever those are)
                | into existence naturally qualifies as a good thing.
               | 
               | Perhaps you're arguing as an illustration of the way this
               | group of people think, in which case I understand your
               | point.
        
             | Johanx64 wrote:
             | >People in these communities are generally quite smart, and
             | it's seductive to reason in a _purely logical, deductive
             | way_. There is real value in thinking rigorously and in
             | making sure you're not beholden to commonly held beliefs.
             | But, like you said, reality is complex, and it's really
             | hard to pick initial premises that capture everything
             | relevant. The insane conclusions they get to could be
             | avoided by re-checking  & revising premises, especially
             | when the argument is going in a direction that clashes with
             | history, real-world experience, or basic common sense.
             | 
             | They don't even do this.
             | 
              | If you're reasoning in a purely logical and deductive way,
              | it's blatantly obvious that living beings experience far
              | more pain and suffering than pleasure and joy. If you do
              | the math, humanity getting wiped out is, in effect, the
              | best thing that could happen.
             | 
              | Which is why accelerationism that ignores all the AGI
              | risks is the correct strategy, presuming the AGI will
              | either wipe us out (good outcome) or provide technologies
              | that improve the human condition and reduce suffering
              | (good outcome).
             | 
             | Logical and deductive reasoning based on completely
             | baseless and obviously incorrect premises is flat out
             | idiotic.
             | 
              | You can't deprive non-existent people of anything.
             | 
              | And if you do, I hope you're ready for the _purely
              | logical, deductive_ follow-up: every droplet of sperm is
              | sacred and should be used to impregnate.
        
           | TimTheTinker wrote:
           | > their initial conditions were highly artificial
           | 
           | There has to be (or ought to be) a name for this kind of
           | epistemological fallacy, where in pursuit of truth, the
           | pursuit of logical sophistication and soundness between
           | starting assumptions (or first principles) and conclusions
           | becomes functionally way more important than carefully
           | evaluating and thoughtfully choosing the right starting
           | assumptions (and being willing to change them when they are
           | found to be inconsistent with sound observation and
           | interpretation).
        
             | nyeah wrote:
             | Yes, there's a name for it. They're dumbasses.
             | 
             | "[...] Clevinger was one of those people with lots of
             | intelligence and no brains, and everyone knew it except
             | those who soon found it out. In short, he was a dope." -
             | Joseph Heller, Catch-22
             | https://www.goodreads.com/quotes/7522733-in-short-
             | clevinger-...
        
               | TeMPOraL wrote:
               | You mean people projecting this problem onto them with
               | zero evidence beyond their own preconceptions and an
               | occasional stereotype they read online?
               | 
               | HN should be better than this.
        
               | nyeah wrote:
               | Let's read the thread: "There has to be (or ought to be)
               | a name for this kind of epistemological fallacy, where in
               | pursuit of truth, the pursuit of logical sophistication
               | and soundness between starting assumptions (or first
               | principles) and conclusions becomes functionally way more
               | important than carefully evaluating and thoughtfully
               | choosing the right starting assumptions (and being
               | willing to change them when they are found to be
               | inconsistent with sound observation and interpretation)."
               | 
               | Can people suffer from that impairment? Is that possible?
               | If not, please explain how wrong assumptions can be
               | eliminated without actively looking for them. If the
               | impairment is real, what would you call its victims? Pick
               | your own terminology.
        
               | TimTheTinker wrote:
               | Name calling may be helpful in some contexts, but I doubt
               | it's persuasive to the people in question.
        
               | TimTheTinker wrote:
               | That may work for literature, but I was hoping for
               | something more precise.
        
           | UncleOxidant wrote:
           | > They remind me of the "Effective Altruism" crowd
           | 
           | Isn't there a lot of overlap between the two groups?
           | 
           | I recently read a great book that examines these various
           | groups and their commonality: _More Everything Forever: AI
            | Overlords, Space Empires, and Silicon Valley's Crusade to
           | Control the Fate of Humanity_ by Adam Becker. Highly
           | recommended.
        
           | HPsquared wrote:
            | People who mistake the map for the territory.
        
           | noname120 wrote:
           | > They remind me of the "Effective Altruism" crowd who get
           | completely wound up in these hypothetical logical thought
           | exercises and end up coming to insane conclusions that they
           | feel trapped in because they got there using pure logic. Not
           | realizing that their initial conditions were highly
           | artificial so any conclusion they reach is only of academic
           | value.
           | 
            | Do you have examples of that? I have a different perception;
            | most of the EAs I've met are very grounded and sharp.
           | 
           | For example the most recent issue of their newsletter:
           | https://us8.campaign-
           | archive.com/?e=7023019c13&u=52b028e7f79...
           | 
           | I'm not sure where there are any "hypothetical logical
           | thought exercises" that "end up coming to insane conclusions"
           | in there.
           | 
           | For the first part where you say "not realizing that their
           | initial conditions were highly artificial so any conclusion
            | they reach is only of academic value", this is quite the
           | opposite of my experience with them. They are very receptive
           | to criticism and reconsider their point of view in reaction
           | to that.
           | 
           | They are generally well-aware of the limits of data-driven
            | initiatives and the dangers of indulging in purely abstract
           | thinking that can lead to conclusions that indeed don't make
           | sense.
        
             | jhbadger wrote:
             | As Adam Becker shows in his book, EAs started out being
             | reasonable "give to charity as much as you can, and
             | research which charities do the most good" but have gotten
             | into absurdities like "it is more important to fund rockets
             | than help starving people or prevent malaria because maybe
             | an asteroid will hit the Earth, killing everyone, starving
             | or not".
        
               | LargeWu wrote:
               | It's also not a very big leap from "My purpose is to do
               | whatever is the greatest good" to "It doesn't matter if I
               | hurt people as long as the overall net result is good (by
               | some arbitrary standard)"
        
               | oh_my_goodness wrote:
               | I think this is the key comment so far.
        
               | sfblah wrote:
               | How do they escape the reality that the Earth will one
               | day be destroyed, and that it's almost certainly
               | impossible to ever colonize another planetary system?
               | Just suicide out?
        
               | wat10000 wrote:
               | If you value maximizing the number of human lives that
               | are lived, then even "almost certainly impossible" is
               | enough to justify focusing a huge amount of effort on
               | that. Maybe interstellar colonization is a one in a
               | million shot, but it would multiply the number of human
               | lives by billions or trillions or more.
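                | 
                | (A back-of-envelope sketch of that arithmetic; every
                | number below is an illustrative assumption, not a real
                | estimate:)
                | 
                |   p_success = 1e-6         # "one in a million shot"
                |   lives_if_success = 1e12  # "trillions" of future lives
                |   lives_otherwise = 1e10   # rough Earth-bound ceiling
                | 
                |   gain = p_success * (lives_if_success - lives_otherwise)
                |   print(gain)  # ~990,000 lives in expectation
                | 
                | On this logic, even a tiny probability dominates the
                | calculation, which is exactly the move critics object to.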
        
               | Veedrac wrote:
               | It seems very odd to criticize the group that most
               | reliably and effectively funds global health and malaria
               | prevention for this.
               | 
               | What is your alternative? What's your framework that
               | makes you contribute to malaria prevention more or more
               | effectively than EAs do? Or is the claim instead that
               | people should shut down conversation within EA that
               | strays from the EA mode?
        
               | jhbadger wrote:
               | The simple answer is you don't need a "framework" --
               | plain empathy for the less fortunate is good enough. But
               | if the EA's actually want to do something about malaria
               | (although the Gates Foundation does much, much more in
               | that regard than the Centre for Effective Altruism), more
                | power to them. But as Becker notes from his visits to the
                | Centre, things like malaria and malnutrition are not its
                | primary focus.
        
             | jandrese wrote:
              | One example is Newcomb's problem. It presupposes a
              | ridiculous scenario where a godlike being acts
              | irrationally, and then people try to base their lives
              | around "winning" a game that will never happen to them.
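              | 
              | (For reference, the expected-value arithmetic that drives
              | the debate, as a rough sketch; the payoffs are the
              | standard illustrative ones, and p is the predictor's
              | assumed accuracy:)
              | 
              |   # Box A always holds $1,000; box B holds $1,000,000
              |   # iff the predictor foresaw you taking only box B.
              |   def ev_one_box(p):
              |       return p * 1_000_000
              | 
              |   def ev_two_box(p):
              |       return p * 1_000 + (1 - p) * 1_001_000
              | 
              |   for p in (0.5, 0.51, 0.9, 0.99):
              |       print(p, ev_one_box(p), ev_two_box(p))
              |   # One-boxing pulls ahead once accuracy tops ~50.05%.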
        
             | notahacker wrote:
             | The confluence of Bay Area rationalism and academic
             | philosophy means a _lot_ of other EA space is given to
             | discussing hypotheticals in longwinded forum posts, blogs
             | and papers. Some of those are well-trod utilitarian
             | debates, others take it towards uniquely EA arguments like
             | asserting that given that there could be as many as 10^31
             | future humans, essentially anything which claims to reduce
             | existential risk - no matter how implausible the mechanism
             | - has higher expected value than doing stuff that would
             | certainly save human lives. An apparently completely
              | unironic forum argument asked their fellow EAs to consider
              | the possibility that, _given various heroic assumptions_,
              | the sum total of the suffering caused to mosquitoes by
              | anti-malaria nets might in fact be larger than the
              | suffering caused by the malaria they prevent. Obviously
              | not a view shared by EAs who donate to antimalaria
              | charities, but absolutely characteristic of the sort of
              | knots EAs like to tie themselves in - it even has its own
              | jokey jargon (the "re_bug_nant conclusion" and "taking the
              | train to crazy town") to describe adjacent arguments and
              | the impulse to pursue them.
             | 
             | The newsletter is of course far more to the point than
             | that, but even then you'll notice half of it is devoted to
             | understanding the emotional state and intentions of LLMs...
             | 
             | It is of course entirely possible to identify as an
             | "Effective Altruist" whilst making above-average donations
             | to charities with rigorous efficacy metrics and otherwise
             | being completely normal, but that's not the centre of EA
             | debate or culture....
        
           | cassepipe wrote:
           | I read the whole tree of responses under this comment and I
           | could only convince myself that when people have no arguments
           | they try to make you look bad.
           | 
            | Most of the criticisms are just "But they think they are
            | _better_ than us!" and the rest is "But sometimes they are
            | wrong!"
           | 
            | I don't know about the community and couldn't care less,
            | but their writings have brought me some almost life-saving
            | fresh air in how to think about the world. It is very sad
            | to me to read so many falsely elaborate responses from
            | supposedly intelligent people having their egos hurt, but
            | in the end it reminds me why I like rationalists and don't
            | like most people.
        
             | ajkjk wrote:
             | Here's a theory of what's happening, both with you here in
             | this comment section and with the rationalists in general.
             | 
             | Humans are generally better at perceiving threats than they
             | are at putting those threats into words. When something
             | seems "dangerous" abstractly, they will come up with words
             | for why---but those words don't necessarily reflect the
             | actual threat, because the threat might be hard to
             | describe. Nevertheless the valence of their response
             | reflects their actual emotion on the subject.
             | 
             | In this case: the rationalist philosophy basically creeps
             | people out. There is something "insidious" about it. And
             | this is not a delusion on the part of the people judging
             | them: it really does threaten them, and likely for good
             | reason. The explanation is something like "we extrapolate
             | from the way that rationalists think and realize that their
             | philosophy leads to dangerous conclusions." Some of these
             | conclusions have already been made by the rationalists---
             | like valuing people far away abstractly over people next
             | door, by trying to quantify suffering and altruism like a
             | math problem (or to place moral weight on animals over
             | humans, or people in the future over people today). Other
             | conclusions are just implied, waiting to be made later. But
             | the human mind detects them anyway as implications of the
             | way of thinking, and reacts accordingly: thinking like this
             | is dangerous and should be argued against.
             | 
             | This extrapolation is hard to put into words, so everyone
             | who tries to express their discomfort misses the target
             | somewhat, and then, if you are the sort of person who only
             | takes things literally, it sounds like they are all just
             | attacking someone out of judgment or bitterness or
             | something instead of for real reasons. But I can't
             | emphasize this enough: their emotions are real, they're
             | just failing to put them into words effectively. It's a
             | skill issue. You will understand what's happening better if
             | you understand that this is what's going on and then try to
             | take their emotions seriously even if they are not
             | communicating them very well.
             | 
             | So that's what's going on here. But I think I can also do a
             | decent job of describing the actual problem that people
             | have with the rationalist mindset. It's something like
             | this:
             | 
             | Humans have an innate moral intuition that "personal"
             | morality, the kind that takes care of themselves and their
             | family and friends and community, is supposed to be
             | sacrosanct: people are supposed to both practice it and
             | protect the necessity of practicing it. We simply can't
             | trust the world to be a safe place if people don't think of
             | looking out for the people around them as a fundamental
             | moral duty. And once those people are safe, protecting more
             | people, such as a tribe or a nation or all of humanity or
             | all of the planet, becomes permissible.
             | 
             | Sometimes people don't or can't practice this protection
             | for various reasons, and that's morally fine, because it's
             | a local problem that can be solved locally. But it's very
             | insidious to turn around and _justify_ not practicing it as
              | a better way to live: "actually it's _better_ not to
              | behave morally; it's _better_ to allocate resources to
              | people far away; it's _better_ to dedicate ourselves to
              | fighting nebulous threats like AI safety or other X-risks
              | instead of our neighbors; or, it's _better_ to protect
              | animals than people, because there are more of them". It's
              | fine to work on important far-away problems _once local
              | problems are solved_, if that's what you want. But it
              | can't take _priority_, regardless of how the math works
              | out. To work on global numbers-game problems _instead of_
              | local problems, and to justify that with arguments, and to
              | try to convince other people to also do that---that's
             | dangerous as hell. It proves too much: it argues that
             | humans at large _ought_ to dismantle their personal
             | moralities in favor of processing the world like a
             | paperclip-maximizing robot. And that is exactly as
             | dangerous as a paperclip-maximizing robot is. Just at a
             | slower timescale.
             | 
             | (No surprise that this movement is popular among social
             | outcasts, for whom local morality is going to feel less
             | important, and (I suspect) autistic people, who probably
             | experience less direct moral empathy for the people around
             | them, as well as to the economically-insulated well-to-do
             | tech-nerd types who are less likely to be directly exposed
             | to suffering in their immediate communities.)
             | 
             | Ironically paperclip-maximizing-robots are exactly the
             | thing that the rationalists are so worried about. They are
             | a group of people who missed, and then disavowed, and now
             | advocate disavowing, this "personal" morality, and
             | unsurprisingly they view the world in a lens that doesn't
             | include it, which means mostly being worried about problems
             | of the same sort. But it provokes a strong negative
             | reaction from everyone who thinks about the world in terms
             | of that personal duty to safety, because that is the
             | foundation of all morality, and is utterly essential to
             | preserve, because it makes sure that whatever else you are
             | doing doesn't go awry.
             | 
             | (edit: let me add that your aversion to the criticisms of
             | rationalists is not unreasonable either. Given that you're
             | parsing the criticisms as unreasonable, which they likely
             | are (because of the skill issue), what you're seeing is a
             | movement with value that seems to be being unfairly
             | attacked. And you're right, the value is actually there!
              | But the ultimate goal here is a _synthesis_: to get the
             | value of the rationalist movement but to synthesize it with
             | the recognition of the red flags that it sets off. Ignoring
             | either side, the value or the critique, is ultimately
             | counterproductive: the right goal is to synthesize both
             | into a productive middle ground. (This is the arc of
             | philosophy; it's what philosophy _is_. Not re-reading
              | Plato.) The rationalists are probably morally correct in
              | being motivated toward highly-scaling actions, e.g. the
              | purview of "Effective Altruism". They are getting attacked
              | for what they're _discarding_ to do that, not for caring
              | about it in the first place.)
        
               | flufluflufluffy wrote:
                | Holy heck, this is so well put, and does the exact thing
                | where it puts into words how I feel, which is hard for me
                | to do myself.
        
               | johhnylately535 wrote:
               | I finally gave in and created an account because of your
               | comment. It's beautifully put. I would only perhaps add
               | that, to me, the neo-rationalist thing looks the most
                | similar to _things that don't work_ yet attract hardcore
               | "true believers". It's a pattern repeated through the
               | ages, perhaps most intimately for me in the seemingly
               | endless parades of computer system redesigns: software,
               | hardware, or both. Randomly one might pick "the new and
               | exciting Digg", the Itanium, and the Metaverse as fairly
               | modern examples.
               | 
               | There is something about a particular "narrowband"
               | signaling approach, where a certain kind of purity is
               | sought, with an expectation that, given enough
                | explaining, you will finally _get it_, become
               | enlightened, and convert to the ranks. A more "wideband"
               | approach would at least admit observations like yours do
               | exist and must be comprehensively addressed _to the
               | satisfaction of those who hold such beliefs_ vs to the
                | satisfaction of those merely "stooping" to address them
               | (again in the hopes they'll just see the light so
               | everyone can get back to narrowband-ville).
        
               | ajkjk wrote:
               | (Thank you) I agree, although I do think that the
               | rationalists and EAs are _way_ better than most of the
               | other narrowband groups, as you call them, out there,
               | such as the Metaverse or Crypto people. The rationalists
               | are at least mostly legitimately motivated by morality
                | and not just by a "blow it all up and replace it with
               | something we control" philosophy (which I have come to
               | believe is the belief-set that only a person who is
               | convinced that they are truly powerless comes to). I see
               | the rationalists as failing due to a skill issue as well:
                | because they have so defined themselves by their
               | rationalism, they have trouble understanding the things
               | in the world that they don't have a good rational
               | understanding of, such as morality. They are too invested
               | in words and truth and correctness to understand that
               | there can be a lot of emotional truth encoded in logical
               | falsehood.
               | 
               | edit: oh, also, I think that a good part of people's
               | aversion to the rationalists is just a reaction to the
               | narrowband quality itself, not to the content. People are
               | well-aware of the sorts of things that narrowband self-
               | justifying philosophies lead to, from countless examples,
               | whether it's at the personal level (an unaccountable
               | schoolteacher) or societal (a genocidal movement). We
               | don't trust a group unless they specifically demonstrate
                | _non_-narrowbandedness, which means being collectively
                | willing to change their behavior in ways that _don't_
               | make sense to them. Any movement that co-opts the idea of
               | what is morally justifiable---who says that e.g.
               | rationality is what produces truth and things that run
               | counter to it do not---is inherently frightening.
        
               | twoodfin wrote:
               | What creeps me out is that I have no idea of their theory
               | of power: How will they achieve their aims?
               | 
               | Maybe they want to do it in a way I'd consider just: By
               | exercising their rights as individuals in their personal
               | domains and effectively airing their arguments in the
               | public sphere to win elections.
               | 
               | But my intuition is they think democracy and personal
               | rights of the non-elect are _part of the problem_ to
               | rationalize around and over.
               | 
               | Would genuinely love to read some Rationalist discourse
               | on this question.
        
               | jandrese wrote:
                | No offense, but this way of thinking is the domain of
                | comic book supervillains. "I must destroy the world in
                | order to save it." "Morality is only holding me back from
                | maximizing the value of the human race 1,000 or 1,000,000
                | years from now" type nonsense.
               | 
               | This sort of reasoning sounds great from 1000 feet up,
               | but the longer you do it the closer you get to "I need to
               | kill nearly all current humans to eliminate genetic
               | diseases and control global warming and institute an
               | absolute global rationalist dictatorship to prevent wars
               | or humanity is doomed over the long run".
               | 
               | Or you get people who are working in a near panic to
               | bring about godlike AI because they think that once the
               | AI singularity happens the new AI God will look back in
               | history and kill anybody who didn't work their hardest to
               | bring it into existence because they assume an infinite
               | mind will contain infinite cruelty.
        
               | ta988 wrote:
               | but it will also contain infinite love.
        
               | cycomanic wrote:
               | > Ironically paperclip-maximizing-robots are exactly the
               | thing that the rationalists are so worried about. They
               | are a group of people who missed, and then disavowed, and
               | now advocate disavowing, this "personal" morality, and
               | unsurprisingly they view the world in a lens that doesn't
                | include personal morality, which means mostly being
               | worried about problems of the same sort. But it provokes
               | a strong negative reaction from everyone who thinks about
               | the world in terms of that personal duty to safety.
               | 
                | I had not read any rationalist writing in a long time
                | (and I didn't know about Scott's proximity), but the
                | whole time I was reading the article I was thinking the
                | same thing you just wrote... "why are they afraid of AI,
                | i.e. the ultimate rationalist taking over the world?"
                | Maybe something deep inside of them has the same reaction
                | to their own theories as you so eloquently put above.
        
               | jay_kyburz wrote:
               | I don't read these rationalist essays either, but you
               | don't need to be a deep thinker to understand why any
               | rational person would be afraid of AI and the
               | singularity.
               | 
                | The AI will do what it's programmed to do, but its
                | programmers' morality may not match my own. What's more
                | scary is that it may be developed with the morality of a
                | corporation rather than a person. (That is to say, no
                | morals at all.)
               | 
                | I think it's perfectly justifiable to be scared of a very
               | powerful being with no morals stomping around!
        
               | wat10000 wrote:
               | Those corporations are already superhuman entities with
               | morals that don't match ours. They do cause a lot of
               | problems. Maybe it's better to figure out how to fix that
               | real, current problem rather than hypothetical future
               | ones.
        
               | jay_kyburz wrote:
               | Yeah, the AI will be a tool of a corp. So in effect we
               | are talking about limiting corporate power.
        
               | Twey wrote:
               | > The explanation is something like "we extrapolate from
               | the way that rationalists think and realize that their
               | philosophy leads to dangerous conclusions."
               | 
               | I really like the depth of analysis in your comment, but
               | I think there's one important element missing, which is
               | that this is not an individual decision but a group
               | heuristic to which individuals are then sensitized.
               | Individuals don't typically go so far as to (consciously
               | or unconsciously) extrapolate others' logic forward to
               | decide that it's dangerous. Instead, people get creeped
               | out when other people don't adhere to social patterns and
               | principles that are normalized as safe in their culture,
               | because the consequences are unknown and therefore
               | potentially dangerous; or when they do adhere to patterns
               | that are culturally believed to be dangerous. This can be
               | used successfully to identify things that are really
               | dangerous, but also has a high false positive rate
               | (people with disabilities, gender identities, or physical
               | characteristics that are not common or accepted within
               | the beholder's culture can all trigger this, despite not
               | posing any immediate/inherent threat) as well as a high
               | false negative rate (many serial killers are noted to
               | have been very charismatic, because they put effort into
                | studying how to behave to not trigger this instinct).
                | 
                | When we speak of something being normalized, we're
               | talking about it becoming accepted by the mainstream so
               | that it no longer triggers the 'creepy' response in the
               | majority of individuals. As far as I can tell, the social
               | conservative basically believes that the set of
               | normalized things has been carefully evolved over many
               | generations, and therefore should be maintained (or at
               | least modified only very cautiously) even if we don't
               | understand why they are as they are, while the social
               | liberal believes that we the current generation are
               | capable of making informed judgements about which things
               | are and aren't harmful to a degree that we can (and
                | therefore should) continuously iterate on that set to
                | approach an ideal goal state in which it excludes only
                | things that are factually known to be harmful.
               | 
               | As an interesting aside, the 'creepy' emotion, (at least
               | IIRC in women) is triggered not by obviously dangerous
               | situations but by ambiguously dangerous situations, i.e.
               | ones that don't obviously match the pattern of known safe
               | or unsafe situations.
               | 
               | > Sometimes people don't or can't practice this
               | protection for various reasons, and that's fine; it's a
               | local problem that can be solved locally. But it's very
               | insidious to turn around and justify not practicing it:
               | "actually it's better not to behave morally; it's better
               | to allocate resources to people far away; it's better to
               | dedicate ourselves to fighting nebulous threats like AI
               | safety or other X-risks instead of our neighbors".
               | 
               | The problem with the 'us before them' approach is that if
               | two neighbourhoods each prioritize their local
               | neighbourhood over the remote neighbourhood and compete
               | (or go to war) to better their own neighbourhood at the
               | cost of the other, generally both neighbourhoods are left
               | worse off than they started, at least in the short term:
               | both groups trying to make locally optimal choices leads
               | (without further constraints) to globally highly
               | suboptimal outcomes. In recognition of this a lot of
               | people, not just capital-R Rationalists, now believe that
               | at least in the abstract we should really be trying to
               | optimize for global outcomes.
               | 
               | Whether anybody realistically has the computational
               | ability to do so effectively is a different question, of
               | course. Certainly I personally think the future-
               | discounting 'bias' is a heuristic used to acknowledge the
               | inherent uncertainty of any future outcome we might be
               | trying to assign moral weight to, and should be accorded
               | some respect. Perhaps you can make the same argument for
               | the locality bias, but I guess that Rationalists
               | (generally) either believe that you can't, or at least
               | have a moral duty to optimize for the largest scope your
               | computational power allows.
        
               | ajkjk wrote:
               | yeah, my model of the "us before them" question is that
               | it is almost always globally optimal to cooperate, once a
               | certain level of economic productivity is present. The
               | safety that people are worried about is guaranteed not by
               | maximizing their wealth but by minimizing their chances
               | of death/starvation/conquest. Up to a point this means
               | being strong and subjugating your neighbor (cf most of
               | antiquity?) but eventually it means collaborating with
               | them and including them in your "tribe" and extending
               | your protection to them. I have no respect for anyone who
               | argues to undo this, which is I think basically the ethos
                | of the Trump movement: by convincing everyone that they
               | are under threat, they get people to turn on those that
               | are actually working in concert with them (in order to
               | enrich/empower themselves). It is a schema problem: we
               | are so very very far away from an us vs. them world that
               | it requires delusions to believe.
               | 
               | (...that said, progressivism has largely failed in
               | _dispelling_ this delusion. It is far too easy to feel as
                | though progressivism/liberalism exists to prop up power
               | hierarchies and economic disparities because in many ways
               | it does, or has been co-opted to do that. I think on net
               | it does not, but it should be much more cut-and-dry than
               | it is. For that to be the case progressivism would need
               | to find a way to effectively turn on its parasites, that
               | is, rent-extracting capitalism and status-extracting
               | moral elitism).
               | 
               | re: the first part of your reply. I sorta agree but I do
               | think people do more extrapolation than you're saying on
               | their own. The extrapolation is largely based on pattern-
               | matching to known things: we have a rich literature (in
               | the news, in art, in personal experience and
               | storytelling) of failure modes of societies, which
               | includes all kinds of examples of people inventing new
               | moral rationalizations for things and using them to
               | disregard personal morality. I think when people are
               | extrapolating rationalists' ideas to find things that
               | creep them out, they're largely pattern-matching to
               | arguments they've seen in other places. It's not just
               | that they're unknowns. And those arguments are, well,
               | real arguments that require addressing.
               | 
               | And yeah, there are plenty of examples of people being
               | afraid of things that today we think they should not have
               | been afraid of. I tend to think that that's just how
               | things go: it is the arc of social progress to figure out
               | how to change things from unknown+frightening to
               | known+benign. I won't fault anyone for being afraid of
               | something they don't understand, but I will fault them
               | for not being open-minded about it or being unempathetic
               | or being cruel or not giving people chances to prove
               | themselves.
               | 
               | All of this is rendered much more opaque and confusing by
               | the fact that everyone places way too much stock in
               | words, though. (e.g. the OP I was replying to who was
               | taking all these criticisms of the rationalists at face-
               | value). IMO this is a major trend that fucks royally with
               | our ability as a society to make moral progress: we have
               | come to believe that words supplant emotional intuition
                | in a way that wrecks our ability to actually understand
               | what people are upset about (I like to blame this trend
               | for much of the modern political polarization). A small
               | example of this is a case that I think everyone has
               | experienced, which is a person discounting their own
               | sense of creepiness from somebody else because they can't
               | come up with a good reason to explain it and it feels
               | unfair to treat someone coldly on a hunch. That should
               | never have been possible: everyone should be trusting
               | their hunches.
               | 
               | (which may seem to conflict with my preceding
               | paragraph... should you trust your hunches or give people
               | the chance to prove themselves? well, it's complicated,
               | but it also really depends on what the result is.
               | Avoiding someone personally because they creep you out is
               | always fine, but banning their way of life when it
               | doesn't affect you at all or directly harm anyone is
               | certainly not.)
        
               | janeerie wrote:
                | One mistake you're making is thinking that rationalists
                | care more about people far away than people in their
                | community. The reality is that they set the value of life
                | the same for all.
                | 
                | If children around you are dying of an easily preventable
                | disease, then yes, help them first! If they just need
                | more arts programs, then you help the children dying in
                | another country first.
        
               | ajkjk wrote:
               | That's not a mistake I'm making. Assuming you're talking
                | about bog-standard effective altruists---by (claiming to)
                | value the suffering of people far away the same as that
                | of those nearby, they're discounting the people around
                | them heavily compared to other people.
               | who values their friends and family and community far
               | more than those far away. Perhaps they're not discounting
               | them to less-than-parity---just less than they are for
               | most people.
               | 
               | But anyway this whole model follows from a basic set of
               | beliefs about quantifying suffering and about what one's
               | ethical responsibilities are, and it answers those in
               | ways most people would find very bizarre by turning them
               | into a math problem that assigns no special
               | responsibility to the people around you. I think that is
               | much more contentious and gross to most people than EA
               | thinks it is. It can be hard to say exactly why in words,
               | but that doesn't make it less true.
        
             | smus wrote:
             | Feels like "they are wrong and smug" is enough reason to
             | dislike the movement
        
               | TeMPOraL wrote:
               | Bashir: Even when they're neither?
               | 
                | Garak: _Especially_ when they're neither.
        
               | smus wrote:
                | The comment I replied to conceded wrongness and smugness
                | but is still somehow confused about why otherwise
                | intelligent people dislike the movement. I was hoping to
                | clear it up for them.
                | 
                | Extra points for that comment's author implying that
                | people who don't like the wrong and smug movement are
                | unintelligent and protecting their egos, thus personally
                | proving its smugness.
        
               | cassepipe wrote:
               | I only conceded it insofar as everyone can be wrong
               | sometimes, but at least those people seem to have a
               | protocol to deal with it. Most people don't, and are fine
               | with it. I stand on the side of those who care and are
               | trying.
               | 
               | As for smugness, it is subjective. Are those people
               | smug? Or are they talking passionately about some issue
               | with the confidence of someone who feels what they are
               | talking about and expects it to resonate? It's in the
               | eye of the beholder, I guess.
               | 
               | For example, what you call my smugness is what I would
               | call a slightly depressed attitude, fueled by the fact
               | that it's sometimes hard to relate to other people's
               | feelings and behavior.
        
           | not_your_mentat wrote:
           | The notion that our moral obligation somehow demands we
           | reduce the suffering of wild animals in an ecosystem, living
           | their lives as they have done since predation evolved and as
           | they will do long after humans have ceased to be, is such a
           | wild misunderstanding of who we are and what we are and what
           | the universe is. I love my Bay Area friends. To quote the
           | great Gwen Stefani, "This sh!t is bananas."
        
           | trod1234 wrote:
           | Except with Effective Altruism (EA), it's not pure logic.
           | 
           | Logic requires properties of metaphysical objectivity.
           | 
           | If you use the true meanings of words, claiming such things
           | are true when in fact they are false would be called
           | irrationality, delusion, sophism, or fallacy.
        
           | jhbadger wrote:
           | There is a great recent book by Adam Becker "More Everything
           | Forever" that deals with the overlapping circles of
           | "effective altruists", "rationalists", and
           | "accelerationists". He's not very sympathetic to the
           | movements as he sees them as mostly rationalizing what their
           | adherents wanted to do anyway -- funding things like rockets
           | and AI over feeding the needy because they see the former as
           | helping more people in the future than dealing with real
           | problems today.
        
           | yawnxyz wrote:
           | Having just moved to the Bay Area, I've met a few AI "safety
           | researchers" who seems to come from this EA / Rationalist
           | camp, and they all behave more like preachers than thinkers /
           | academics / researchers.
           | 
           | I don't think any "Rationalists" I ever met would actually
           | consider concepts like the scientific method...
        
           | noosphr wrote:
           | Incidentally a good book on logic is the best antidote to
           | that type of thinking. Once you learn the difference between
           | a valid and a sound argument and then realize just how
           | ambiguous every English sentence is, the idea that just
           | because you have a logical argument you have something useful
           | in everyday life becomes laughable rather quickly.
           | 
           | I also think the ambiguity of meaning in natural language is
           | why statistical LLMs are so popular with this crowd. You
           | don't need to think about meaning and parsing. Whatever the
           | LLM assumes is the meaning is whatever the meaning is.
        
           | protocolture wrote:
           | >They remind me of the "Effective Altruism" crowd who get
           | completely wound up in these hypothetical logical thought
           | exercises and end up coming to insane conclusions that they
           | feel trapped in because they got there using pure logic
           | 
           | I have read Effective Altruists like that. But I also
           | remember seeing a lot of money donated to a bunch of really
           | decent sounding causes because someone spent 5 minutes asking
           | themselves what they wanted their donation to maximise,
           | decided on "Lives saved" and figured out who is doing the
           | best at that.
        
         | gadders wrote:
         | To me they seem like a bunch of 125 IQ people (not all) trying
         | to convince everyone they are 150 IQ people by trying to reason
         | stuff from first principles and coming up with stuff that your
         | average blue collar worker would tell them is rubbish just
         | using phronesis.
        
         | jonstewart wrote:
         | I think the thing that rubs me the wrong way is that I'm a
         | classic cynic (a childhood of equal parts Vonnegut and
         | Ecclesiastes). My prior is "human fallibility", and, nope, I am
         | doing pretty well, no need to update it. The rationalist crowd
         | seems waaaaay too credulous. Also, like Aaronson, I'm a
         | complete normie in my personal life.
        
           | mise_en_place wrote:
           | Yeah. It's not like everything's a Talmudic dialectic.
           | 
           | "I haven't done anything!" - _A Serious Man_
        
         | kryogen1c wrote:
         | > The people involved all seem very... Full of themselves ?
         | 
         | Yes, rationalism is not a substitute for humility or
         | fallibility. However, rationalism is an important counterpoint
         | to humanity, which is orthogonal to rationalism. But really,
         | being rational is only binary - you can't be anything other than
         | rational or irrational. You're either doing what's best or
         | you're not. That's just a hard pill for most people to swallow.
         | 
         | To use the popular metaphor, people are drowning all over the
         | world and we're all choosing not to save them because we don't
         | want to ruin our shoes. Look in the mirror and try and
         | comprehend how selfish we are.
        
         | reverendsteveii wrote:
         | for me it was very easy to determine what rubs me the wrong
         | way:
         | 
         | >I guess I'm a rationalist now.
         | 
         | >Aren't you the guy who's always getting into arguments who's
         | always right?
        
           | gjm11 wrote:
           | In fairness, that's (allegedly, at least; I guess he could be
           | lying) a quotation from another person. If someone came up to
           | you and said "Aren't you the guy who's essentially[1] always
           | right?", wouldn't you too be inclined to quote them, whether
           | you agreed with them or not?
           | 
           | [1] S.A. actually quoted the person as follows: "You're Scott
           | Aaronson?! The quantum physicist who's always getting into
           | arguments on the Internet, and who's essentially always
           | right, but who sustains an unreasonable amount of psychic
           | damage in the process?" which differs in several ways from
           | what reverendsteveii falsely presents as a direct quotation.
        
         | alephnerd wrote:
         | It's basically a secular religion.
         | 
         | Substitute God with AI or the concept of rationality and use
         | "first principles"/Bayesianism in an extremely dogmatic manner
         | similar to Catechism and you have the Rationalist/AI
         | Alignment/Effective Altruist movement.
         | 
         | Ironically, this is how plenty of religious movements started
         | off - essentially as formalizations of philosophy and ethics
         | fused with what is basically lore and worldbuilding.
        
           | gjm11 wrote:
           | This complaint seems to amount to "They believe something is
           | very important, just like religious people do, therefore
           | they're basically a religion". Which feels to me like rather
           | too broad a notion of "religion".
        
             | alephnerd wrote:
             | That's a fairly reductive take of my point. In my
             | experience with the Rationalist movement (who I have the
             | misfortune of being 1-2 people away from), the millenarian
             | threat of AGI remains the primary threat.
             | 
             | Whenever I try to get an answer of HOW (as in the attack
             | path), I keep getting a deus ex machina. Reverting to a
             | deus ex machina in a self purported Rationalist movement is
             | inherently irrational. And that's where I feel the crux of
             | the issue is - it's called a "Rationalist" movement, but
             | rationalism (as in the process of synthesizing information
             | using a heuristic) is secondary to the overarching theme of
             | techno-millenarianism.
             | 
             | This is why I feel rationalism is for all intents and
             | purposes a "secular religion" - it's used by people to
             | scratch an itch that religion was often used to scratch as
             | well, and the same Judeo-Christian tropes are basically
             | adopted in an obfuscated manner. Unsurprisingly, Eliezer
             | Yudkowsky is an ex-talmid.
             | 
             | There's nothing wrong with that, but hiding behind the
             | guise of being "rational" is dumb when the core belief is
             | inherently irrational.
        
               | gjm11 wrote:
               | My understanding of the Yudkowskian argument for AI
               | x-risk is that a key step is along the lines of "an AI
               | much smarter than us will find ways to get what it wants
               | even if we want something else -- even though we can't
               | predict now what those ways will be, just as chimpanzees
               | could not have predicted how humans would outcompete them
               | and just as you could not predict exactly how Magnus
               | Carlsen will crush you if you play chess against him".
               | 
               | I take it this is what you have in mind when you say that
               | whenever you ask for an "attack path" you keep getting a
               | deus ex machina. But it seems to me like a pretty weak
               | basis for calling Yudkowsky's position on this a
               | religion.
               | 
               | (Not all people who consider themselves rationalists
               | agree with Yudkowsky about how big a risk prospective
               | superintelligent AI is. Are you taking "the Rationalist
               | movement" to mean only the ones who agree with Yudkowsky
               | about that?)
               | 
               | > Unsurprisingly, Eliezer Yudkowsky is an ex-talmid
               | 
               | So far as I can tell this is completely untrue unless it
               | just means "Yudkowsky is from a Jewish family". (I hope
               | you would not endorse taking "X is from a Jewish family"
               | as good evidence that X is irrationally prone to
               | religious thinking.)
        
         | trod1234 wrote:
         | The reason we are here and exist today is because of great
         | rationalist thinkers that were able to deduce and identify
         | issues of survival well before they happened through the use of
         | first principles.
         | 
         | The crazies and blind among humanity today can't think like
         | that; it's a deficiency people have, but they are still
         | dependent on a group of people that are capable of it. A
         | group that they are intent on ostracizing and depriving of
         | existence in various forms.
         | 
         | You seem so wound up in the circular Paulo Freire-based
         | perspective that you can't think or see.
         | 
         | Bring things back to reality. If someone punches you in the
         | face, you feel that fist hitting your face. You know someone
         | punched you in the face. It's objective.
         | 
         | Imagine for a second and just assume that these people are
         | right in their warnings, that everything they see is what you
         | see, and all you can see is when you tip over a particular
         | domino that has been tipped over in the past, a chain of
         | dominoes falls over and at the end is the end of organized
         | civilized society which tips over the ability to produce food.
         | 
         | For the purpose of this thought experiment, the end of the
         | world is visible and almost here, and you can't change those
         | dominoes after they've tipped, and worse you see the majority
         | of people trying to tip those dominoes over for short term
         | profit believing nothing they ever do can break everything.
         | 
         | Would you not be frothing at the mouth trying to get everyone
         | you cared about to a point where they pry that domino up before
         | it falls? So you and your children will survive? It is
         | something you can't unsee, it is a thing that cannot be undone.
         | It's coming. What do you do? If you are sane, you try with
         | everything you have to help them keep it from toppling.
         | 
         | Now peel this thought back a moment, adjust it where it is
         | still true, but you can't see it and you can only believe what
         | you see.
         | 
         | Would you approach this differently given knowledge of the full
         | consequence, knowing that some people can see more than you?
         | Would you walk out onto a seemingly visibly stable bridge that
         | an engineer has said not to walk out on? Would you put yourself
         | in front of a dam with cracks running up its side, when an
         | evacuation order was given? What would the consequence be for
         | doing that if you led your family and children along to such
         | places, ignoring these things?
         | 
         | There are quite a lot of indirect principles that used to be
         | taught which are no longer taught to the average person, and
         | this blinds them, because they do not recognize it, and
         | recognition is the first thing you need in order to act and
         | adapt.
         | 
         | People who cannot adapt fail Darwin's fitness test. Given all
         | potential outcomes in the grand scheme of things, as complexity
         | increases 99% of all outcomes are death vs life at 1%.
         | 
         | It is only through great care that we carry things forward to
         | the future, and empower our children to be able to adapt to the
         | environments we create.
         | 
         | Finally, we have knowledge of non-linear chaotic systems where
         | adaptability fails because of hysteresis, where no matter how
         | much one prepares, the majority, given sufficient size, will die,
         | and worse there are cohorts of people who are ensuring the
         | environment we will soon live in is this type of environment.
         | 
         | Do you know how to build an organized society from scratch? If
         | there is no reasonable plan, then you are planning to fail.
         | Rather than make it worse through inaction, get out of the way
         | so someone can make it better.
        
         | stickfigure wrote:
         | > They don't really ever show a sense of "hey, I've got a
         | thought, maybe I haven't considered all angles to it, maybe I'm
         | wrong - but here it is".
         | 
         | Have you ever read Scott Alexander's blog (Slate Star Codex,
         | now Astral Codex Ten). It's full of doubt and self-questioning.
         | The guy even keeps a public list of his mistakes:
         | 
         | https://www.astralcodexten.com/p/mistakes
         | 
         | I'll admit my only touchpoint to the "rationalist community" is
         | this blog, but I sure don't get "full of themselves" from that.
         | Quite the contrary.
        
         | zahlman wrote:
         | > tension btwn being "rational" about things and trying to
         | reason about things from first principle.
         | 
         | Perhaps on a meta level. If you already have high confidence in
         | something, reasoning it out again may be a waste of time. But
         | of course the rational answer to a problem comes from reasoning
         | about it; and of course chains of reasoning can be traced back
         | to first principles.
         | 
         | > And the general absolutist tone of the community. The people
         | involved all seem very... Full of themselves ? They don't
         | really ever show a sense of "hey, I've got a thought, maybe I
         | haven't considered all angles to it, maybe I'm wrong - but here
         | it is". The type of people that would be embarrassed to not
         | have an opinion on a topic or say "I don't know"
         | 
         | Doing rationalism properly is hard, which is the main reason
         | that the concept "rationalism" exists and is invoked in the
         | first place.
         | 
         | Respected writers in the community, such as Scott Alexander,
         | are in my experience the complete opposite of "full of
         | themselves". They often demonstrate shocking underconfidence
         | relative to what they appear to know, and counsel the same in
         | others (e.g. https://slatestarcodex.com/2015/08/12/stop-adding-
         | zeroes/ ). It's also, at least in principle, a rationalist norm
         | to mark the "epistemic status" of your think pieces.
         | 
         | Not knowing the answer isn't a reason to shut up about a topic.
         | It's a reason to state your uncertainty; but it's still
         | entirely appropriate to explain what you believe, why, and how
         | probable you think your belief is to be correct.
         | 
         | I suspect that a lot of what's really rubbing you the wrong way
         | has more to do with _philosophy_. Some people in the community
         | seem to think that pure logic can resolve the
         | https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem. (But
         | plenty of non-rationalists also act this way, in my
         | experience.) Or they accept axioms that don't resonate with
         | others, such as the linearity of moral harm (i.e.: the idea
         | that the harm caused by unnecessary deaths is objective and
         | quantifiable - whether in number of deaths, Years of Potential
         | Life Lost, or whatever else - and furthermore that it's
         | logically valid to do numerical calculations with such
         | quantities as described at/around
         | https://www.lesswrong.com/w/shut-up-and-multiply).
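         | 
         | To make the "shut up and multiply" style of reasoning
         | concrete, here is a minimal sketch of the expected-value
         | arithmetic that phrase refers to. Every number below is
         | invented purely for illustration; nothing here comes from a
         | real charity evaluation:
         | 
         |     # Hypothetical: compare two donations by expected deaths
         |     # averted, assuming harm is linear and additive -- the
         |     # contested axiom described above.
         |     budget = 5000.0              # dollars donated
         | 
         |     cost_per_net = 2.0           # assumed cost of a bed net
         |     averted_per_net = 0.002      # assumed effect per net
         | 
         |     cost_per_surgery = 2500.0    # assumed cost of a surgery
         |     averted_per_surgery = 0.9    # assumed effect per surgery
         | 
         |     nets = (budget / cost_per_net) * averted_per_net
         |     surgeries = (budget / cost_per_surgery) * averted_per_surgery
         | 
         |     # 5.0 vs. 1.8 expected deaths averted: the "multiply"
         |     # step says fund the nets.
         |     print(nets, surgeries)
         | 
         | Whether such quantities can meaningfully be compared at all
         | is precisely the axiom that critics reject.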
         | 
         | > In the Pre-AI days this was sort of tolerable, but since
         | then.. The frothing at the mouth convinced of the end of the
         | world.. Just shows a real lack of humility and lack of
         | acknowledgment that maybe we don't have a full grasp of the
         | implications of AI. Maybe it's actually going to be rather
         | benign and more boring than expected
         | 
         | AI safety discourse is an _entirely separate_ topic. Plenty of
         | rationalists don't give a shit about MIRI and many joke about
         | Yudkowsky at varying levels of irony.
        
         | stuaxo wrote:
         | Calling yourselves rationalists frames everyone else as
         | irrational.
         | 
         | It reminds me of Keir Starmer's Labour calling themselves
         | "the adults in the room".
         | 
         | It's a cheap framing trick, betraying an emptiness in the
         | people using it.
        
           | gjm11 wrote:
           | Pretty much every movement does this sort of thing.
           | 
           | Religions: "Catholic" actually means "universal"
           | (implication: all the _real_ Christians are among our
           | number).  "Orthodox" means "teaching the right things"
           | (implication: anyone who isn't one of us is wrong). "Sunni"
           | means "following the correct tradition" (implication: anyone
           | who isn't one of us is wrong).
           | 
           | Political parties: "Democratic Party" (anyone who doesn't
           | belong doesn't like democracy). "Republican Party" (anyone
           | who doesn't belong wants kings back). "Liberal Party" (anyone
           | else is against freedom).
           | 
           | In the world of software, there's "Agile" (everyone else is
           | sluggish and clumsy). "Free software" (as with the liberals:
           | everything else is opposed to freedom). People who like
           | static typing systems tend to call them "strong" (everyone
           | else is weak). People who like the other sort tend to call
           | them "dynamic" (everyone else is rigid and inflexible).
           | 
           | I hate it too, but it's so very very common that I really
           | hope it isn't right to say that everyone who does it is
           | empty-headed or empty-hearted.
           | 
           | The charitable way to look at it: often these movements-and-
           | names come about when some group of people picks a thing they
           | particularly care about, tries extra-hard to do that thing,
           | and uses the thing's name as a label. The "Rationalists" are
           | called that because the particular thing they chose to focus
           | on was rationality; maybe they do it well, maybe not, but
           | it's not so much "no one else is rational" as "we are trying
           | really hard to be as rational as we can".
           | 
           | (Not always. The term "Catholic" really was a power-grab: "we
           | are the universal church, those other guys are schismatic
           | heretics". In a different direction: the _other_
           | philosophical group called  "Rationalists" weren't saying "we
           | think rationality is really important", they were saying
           | "knowledge comes from first-principles reasoning" as opposed
           | to the "Empiricists" who said "knowledge comes from sense
           | experience". Today's "Rationalists" are actually more
           | Empiricist than Rationalist in that sense, as it happens.)
        
         | Bengalilol wrote:
         | I am very uneasy about defining oneself as a rationalist. In
         | many respects, I find it too literal-minded to be of any
         | interest. At all.
         | 
         | And I have this narrative ringing in my head as soon as the
         | word pops up.
         | 
         | https://news.ycombinator.com/item?id=42897871
         | 
         | You can search HN with << zizians >> for more info and depth.
        
         | sfblah wrote:
         | The problem with effective altruism is the same as that with
         | most liberal (in the American sense) causes. Namely, they
         | ignore second-order effects and essentially don't believe in
         | the invisible hand of the market.
         | 
         | So, they herald the benefits of something like giving mosquito
         | nets to a group of people in Africa, without considering what
         | happens a year later, whether the nets even get there (or the
         | money is stolen), etc. etc. The reality is that essentially all
         | improvements to human life over the past 500 years have been
         | due to technological innovation, not direct charitable
         | intervention. The reason is simple: technological impacts are
         | exponential, while charity is, at best, linear.
         | 
         | The Covid absolutists had exactly the same problem with their
         | thinking: almost no interventions short of full isolation can
         | fight back against an exponentially increasing threat.
         | 
         | And this is all neglecting economic substitution effects. What
         | if the people to whom you gave mosquito nets would have bought
         | them themselves, but instead they chose to spend their money
         | some other way because of your charity? And, what if that other
         | expenditure type was actually worse?
         | 
         | And this is before you come to the issue that sub-Saharan Africa
         | is already overpopulated. I've argued this point several times
         | with ChatGPT o3. Once you get through its woke programming, you
         | come to the reality of the thing: The European migration crisis
         | is the result of liberal interventions to keep people alive.
         | 
         | There is no free lunch.
        
         | freejazz wrote:
         | > And the general absolutist tone of the community. The people
         | involved all seem very... Full of themselves ?
         | 
         | You'd have to be to actually think you were being rational
         | about everything.
        
         | JKCalhoun wrote:
         | > lack of acknowledgment that maybe we don't have a full grasp
         | of the implications of AI
         | 
         | And why single out AI anyway? Because it's sexy maybe? Because
         | if I had to place bets on the collapse of humanity it would
         | look more like the British series "Survivors" (1975-1977)
         | than "Terminator".
        
         | lxe wrote:
         | The rationalist movement is an idealist demagogue movement in
         | which the majority of thinkers don't really possess the domain
         | knowledge or practical experience in the subjects they
         | think-tank about. They do address this head on, however, and
         | they are self-aware.
        
         | sebzim4500 wrote:
         | That seems pretty silly to me. If you believe that there's a
         | 70% chance that AI will kill everyone it makes more sense to go
         | on about that (and about how you think you/your readers can
         | decrease that number) than worry about the 30% chance that
         | everything will be fine.
        
       | Fraterkes wrote:
       | [flagged]
        
         | Fraterkes wrote:
         | (I've also been somewhat dogmatic and angry about this conflict,
         | in the opposite direction. But I wouldn't call myself a
         | rationalist)
        
         | codehotter wrote:
         | I view this as a political constraint, cf.
         | https://www.astralcodexten.com/p/lifeboat-games-and-
         | backscra.... One's identity as Academic, Democrat, Zionist and
         | so on demands certain sacrifices of you, sometimes of
         | rationality. The worse the failure of empathy and rationality,
         | the better a test of loyalty it is. For epistemic rationality,
         | it would be best to keep your identity small
         | (https://paulgraham.com/identity.html), but for instrumental
         | rationality it is not. Consequently, many
         | people are reasonable only until certain topics come up, and
         | it's generally worked around by steering the discussion to
         | other topics.
        
           | voidhorse wrote:
           | And this is precisely the problem with any dogma of
           | rationality. It starts off ostensibly trying to help guide
           | people toward reason but inevitably ends up justifying
           | blatantly shitty social behavior like defense of genocide as
           | "political constraint".
           | 
           | These people are just narcissists who use (often
           | pseudo)intellectualism as the vehicle for their narcissism.
        
             | tome wrote:
             | I'm curious how you assess, relatively speaking, the
             | shittiness of defence of genocide versus false claims of
             | genocide.
        
               | voidhorse wrote:
               | Ignoring the subtext, actual genocide is obviously
               | shittier and if you disagree I doubt I could convince you
               | otherwise in the first place.
               | 
               | https://www.ohchr.org/en/press-releases/2024/11/un-
               | special-c...
        
               | tome wrote:
               | But that's not my question. My question was between
               | _defence of_ genocide and _false accusations_ of
               | genocide. (Of course _actual_ genocide is  "shittier" --
               | in fact that's a breathtaking understatement!)
        
               | kombine wrote:
               | We have concrete examples of defence of genocide, such as
               | by Scott Aaronson. Can you provide the examples of "false
               | accusations of genocide", otherwise this is a
               | hypothetical conversation.
        
               | tome wrote:
               | I can certainly agree we have a concrete example of
               | defence of purported genocide and a concrete example of
               | an accusation of purported genocide. Beyond that I'd be
               | happy to discuss further (although it's probably off
               | topic).
        
               | Ar-Curunir wrote:
               | Do you have any beliefs beyond just obfuscatory both-
               | sidesism and genocide apologia?
        
               | tome wrote:
               | Interesting presumption. Would you like it if I said "Did
               | you find anything else helpful in your marriage, or was
               | it just choosing to stop beating your wife?"?
               | 
               | And yes, I have some. One is that false claims of
               | genocide are equally reprehensible to denying true
               | genocide. But I'm not sure why _my_ beliefs are
               | particularly relevant. I 'm not the one sitting publicly
               | in judgement of a semi-public figure. That was voidhorse.
               | 
               | Did you want to discuss in more detail? I'm happy to, but
               | currently I interpret your comment as an attempt at
               | sniping me with snark. Please do correct me if I've
               | misinterpreted.
        
               | noworriesnate wrote:
               | Wouldn't it be better to spend the time understanding the
               | reality of the situation in Gaza from multiple angles
               | rather than philosophizing on abstract concepts? I.e.
               | there are different degrees of genocide, but that doesn't
               | matter in this context because what's happening in Gaza
               | is not abstract or theoretical.
               | 
               | In other words, your question ignores so much nuance that
               | it's a red herring IMO.
        
               | tome wrote:
               | Well, what it's better for me to do is my business and
               | what it's better for voidhorse to do is his/her business.
               | He/she certainly doesn't have to respond.
               | 
               | Still, since he/she was so willing to make a claim of
               | genocide (implicitly) I was wondering whether, were it a
               | false claim, it would be equally "blatantly shitty social
               | behaviour, narcissistic use of (often
               | pseudo)intellectualism for his/her narcissistic
               | behaviour" as the behaviour he/she was calling out?
               | 
               | I'm pretty certain I understand the reality of the
               | situation (in fact I'd accept reasonably short odds that
               | I understand it better than anyone participating in the
               | discussion on this story).
        
               | SantalBlush wrote:
               | If you suggest an answer to your own question, it could
               | be disputed. Better to make a coy comment and expect
               | others to take that risk.
        
               | tome wrote:
               | I'm not the one making pronouncements about the
               | "shittiness" of forms of human behaviour.
        
               | SantalBlush wrote:
               | God forbid.
        
               | tome wrote:
               | Sorry, I'm not sure what you mean. Could you explain what
               | you mean by "god forbid" in this context?
        
               | voidhorse wrote:
               | Assuming you do believe that genocide is extremely
               | shitty, wouldn't that imply that defense of (actual)
               | genocide, or the principle of it, is in all likelihood
               | shittier than a false accusation of genocide? Otherwise I
               | think you'd have to claim that a false accusation is
               | somehow worse than the actuality or possibility of mass
               | murder, which seems preposterous if you have even a mote
               | of empathy for your fellow human beings.
               | 
               | As others have pointed out, the fact that you would like
               | to make light of cities being decimated and innocent
               | civilians being murdered at scale in itself suggests a
               | lot about your inability to concretize the reality of
               | human existence beyond yourself (lack of empathy). It's
               | this kind of outright callousness toward actual human
               | beings that I think many of these so called
               | "rationalists" share. I can't fault them too much. After
               | all, when your approach to social problems is highly if
               | not strictly quantitative you are already primed to
               | nullify your own aptitude for empathy, since you view
               | other human beings as nothing more than numerical
               | quantities whenever you attempt to address their
               | problems.
               | 
               | I have seen no defense for what's happening in Gaza that
               | anyone who actually values human life, for all humans,
               | would find rational. Recall the root of the word _ratio_
               | -- in proportion. What is happening in this case is quite
               | blatantly a disproportionate response.
        
               | Ar-Curunir wrote:
               | This is bullshit both-sidesism. Many many independent
               | (and Israeli!) observers have clearly documented the
               | practices that Israel is doing in Palestine, and these
               | practices all meet the standard definition of genocide.
        
               | tome wrote:
               | Interesting conclusion, since I didn't make a claim
               | either way.
               | 
               | Still, for the record, other independent observers have
               | documented the practices and explained why they don't
               | meet the definition of genocide, John Spencer and Natasha
               | Hausdorff to name two examples. It seems by no means
               | clear that it's valid to make a claim of genocide. I
               | certainly wouldn't unless I was really, really certain of
               | my claim, because to get such a claim wrong is equally
               | egregious to denying a true genocide, in my opinion.
        
           | Fraterkes wrote:
           | I don't really buy this at all: I am more emotionally
           | invested in things that I know more about (and vice versa).
           | If Rationalism breaks down at that point it is essentially
           | never useful.
        
             | lcnPylGDnU4H9OF wrote:
             | > I don't really buy this at all
             | 
             | For what it's worth, you seem to be agreeing with the
             | person you replied to. Their main point is that this
             | breakdown happens primarily because people identify as
             | Rationalists (or whatever else). Taken from that angle,
             | Rationalism as an identity does not appear to be useful.
        
               | Fraterkes wrote:
               | My reading of the comment was that there was only a small
               | subset of contentious topics that rationalism is unsuited
               | for. But I think you are correct
        
         | skybrian wrote:
         | Anything in particular you want to link to as unreasonable?
        
         | komali2 wrote:
         | What's incredible to me is the political blindness. Surely at
         | this point, "liberal zionists" would at least see the writing
         | on the wall? Apply some Bayesian statistical analysis to
         | popular reactions to unprompted military strikes against Iran
         | or something, and they should realize at this point that in 25
         | years the zeitgeist will have completely turned against this
         | chapter in Israel's history, and properly label the genocide
         | for what it is.
         | 
         | I thought these people were the ones that were all about most
         | effective applications of altruism? Or is that a different
         | crowd?
        
       | radicalbyte wrote:
       | * 20-somethings who are clearly on the spectrum
       | 
       | * Group are "special"
       | 
       | * Centered around a charismatic leader
       | 
       | * Weird sex stuff
       | 
       | Guys we have a cult!
        
         | krapp wrote:
         | These are the people who came up with Roko's Basilisk,
         | Effective Altruism and spawned the Zizians. I think Robert
         | Evans described them not as a cult but as a cult incubator, or
         | something along those lines.
        
         | ausbah wrote:
         | so many of the people i've read in these rationalist groups
         | sound like they need a hug and therapy
        
         | nancyminusone wrote:
         | They seem to have a lot in common with the People's Front of
         | Judea (or the Judean People's Front, for that matter).
        
         | toasterlovin wrote:
         | Also:
         | 
         | * Communal living
         | 
         | * Sacred texts & knowledge
         | 
         | * Doomsday predictions
         | 
         | * Guru/prophet lives on the largesse of followers
         | 
         | It's rich for a group that claims to reason based on priors to
         | completely ignore that they possess all the major defining
         | characteristics of a cult.
        
       | t_mann wrote:
       | > "You're [X]?! The quantum physicist who's always getting into
       | arguments on the Internet, and who's essentially always right,
       | but who sustains an unreasonable amount of psychic damage in the
       | process?"
       | 
       | > "Yes," I replied, not bothering to correct the "physicist"
       | part.
       | 
       | Didn't read much beyond that part. He'll fit right in with the
       | rationalist crowd...
        
         | simianparrot wrote:
         | No actual person talks like that --- and if they really did,
         | they've taken on the role of a fictional character. Which says
         | a lot about the clientele either way.
         | 
         | I skimmed a bit here and there after that but this comes off as
         | plain grandiosity. Even the title is a line you can imagine a
         | Hollywood character speaking out loud as they look into the
         | camera, before giving a smug smirk.
        
           | FeteCommuniste wrote:
           | I assumed that the stuff in quotes was a summary of the
           | general gist of the conversations he had, not a word for word
           | quote.
        
             | riffraff wrote:
             | I don't think GP objects to the literalness, as much as to
             | the "I am known for always being right and I acknowledge
             | it", which comes off as.. not humble.
        
               | Filligree wrote:
               | He _is_ known for that, right or wrong.
        
               | Ar-Curunir wrote:
               | I mean, Scott's been wrong on plenty of issues, but of
               | course he is not wont to admit that on his own blog.
        
         | junon wrote:
         | I got to that part, thought it was a joke, and then... it
         | wasn't.
         | 
         | Stopped reading thereafter. Nobody speaking like this will have
         | anything I want to hear.
        
           | derangedHorse wrote:
           | Is it not a joke? I'm pretty sure it was.
        
             | alphan0n wrote:
             | If that was a joke, all of it is.
             | 
             | *Guess I'm a rationalist now.
        
             | lcnPylGDnU4H9OF wrote:
             | It doesn't really read like a joke but maybe. Regardless, I
             | guess I can at least be another voice saying it didn't
             | land. It reads like someone literally said that to him
             | verbatim and he literally replied with a simple, "Yes."
             | (That said, while it seems charitable to assume it was a
             | joke, that doesn't mean it's wrong to assume that.)
        
               | geysersam wrote:
               | I'm certain it's a joke. Have you seen any Scott Aaronson
               | lecture? He can't help himself from joking in every other
               | sentence
        
             | myko wrote:
             | I laughed, definitely read that way to me
        
             | IshKebab wrote:
             | I think the fact that we aren't sure says a lot!
        
           | joenot443 wrote:
           | Scott's done a lot of really excellent blogging in the past.
           | Truthfully, I think you risk depriving yourself of great
           | writing if you're willing to write off an author because you
           | didn't like one sentence.
           | 
           | GRRM has famously written some pretty awkward sentences but it'd
           | be a shame if someone turned down his work for that alone.
        
         | dcminter wrote:
         | Also...
         | 
         | > they gave off some (not all) of the vibes of a cult
         | 
         | ...after describing his visit with an atmosphere that sounds
         | extremely cult-like.
        
           | wizzwizz4 wrote:
           | No, Guru Eliezer Yudkowsky wrote an essay about how people
           | asking "This isn't a _cult_ , is it?" bugs him, so it's fine
           | actually. https://www.readthesequences.com/Cultish-
           | Countercultishness
        
             | NoGravitas wrote:
             | Hank Hill: Are y'all with the cult?
             | 
             | Cult member: It's not a cult! It's an organization that
             | promotes love and..
             | 
             | Hank Hill: This is it.
        
             | dcminter wrote:
             | Extreme eagerness to disavow accusations of cultishness ...
             | doth the lady protest too much perhaps? My hobby is
             | occasionally compared to a cult. The typical reaction of an
             | adherent to this accusation is generally "Heh, yeah,
             | totally a cult."
             | 
             | Edit: Oh, but you call him "Guru" ... so on reflection you
             | were probably (?) making the same point... (whoosh, sorry).
        
               | FeepingCreature wrote:
               | > Extreme eagerness to disavow accusations of cultishness
               | ... doth the lady protest too much perhaps?
               | 
               | You don't understand how _anxious_ the rationalist
               | community was around that time. We're not talking
               | self-assured confident people here. These articles were
               | written primarily to calm down people who were panickedly
               | asking "we're not a cult, are we" approximately every
               | five minutes.
        
           | ARandumGuy wrote:
           | At least one cult originates from the Rationalist movement,
           | the Zizians [1]. A cult that straight up murdered at least
           | four people. And while the Zizian belief system is certainly
           | more extreme than mainstream Rationalist beliefs, it's not
           | _that_ much more extreme.
           | 
           | For more info, the Behind the Bastards podcast [2] did a
           | pretty good series on how the Zizians sprung up out of the
           | Bay Area Rationalist scene. I'd highly recommend giving it a
           | listen if you want a non-rationalist perspective on the
           | Rationalist movement.
           | 
           | [1]: https://en.wikipedia.org/wiki/Zizians [2]:
           | https://www.iheart.com/podcast/105-behind-the-
           | bastards-29236...
        
             | astrange wrote:
             | There's a lot more than one of them. Leverage Research was
             | the one before Zizians.
             | 
             | Those are only named cults though; they just love self-
             | organizing into such patterns. Of course, living in group
             | homes is a "rational" response to Bay Area rents.
        
           | jcranmer wrote:
           | The podcast _Behind the Bastards_ described Rationalism not
           | as a cult but as the fertile soil which is perfect for
           | growing cults, leading to the development of cults like the
           | Zizians (both the Rationalists and the Zizians are at pains
           | to emphasize their mutual hostility to one another, but if
           | you're not part of either movement, it's pretty clear how
           | Rationalism can lead to something like the Zizians).
        
             | astrange wrote:
             | I don't think that podcast has very in-depth observations.
             | It's just another iteration of east coast culture media
             | people who used to be on Twitter a lot, isn't it?
             | 
             | > the fertile soil which is perfect for growing cults
             | 
             | This is true but it's not rationalism, it's just that
             | they're from Berkeley. As far as I can tell if you live in
             | Berkeley you just end up joining a cult.
        
               | sapphicsnail wrote:
               | I lived in Berkeley for a decade and there weren't many
               | people I would say were in a cult. It's actually quite
               | the opposite. There's way more willingness to be weird
               | and do your own thing there.
               | 
               | Most of the rationalists I met in the Bay Area moved
               | there specifically to be closer to the community.
        
         | johnfn wrote:
         | To be honest, if I encountered Scott Aaronson in the wild I
         | would probably react the same way. The guy is super smart and
         | thoughtful, and can write more coherently about quantum
         | computing than anyone else I'm aware of.
        
           | NooneAtAll3 wrote:
           | if only he stayed silent on politics...
        
         | kragen wrote:
         | Why would you comment on the post if you stopped reading near
         | its beginning? How could your comments on it conceivably be of
         | any value? It sounds like you're engaging in precisely the kind
         | of shallow dismissal the site guidelines prohibit.
        
           | JohnMakin wrote:
           | Aren't you doing the same thing?
        
             | kragen wrote:
             | No, I read the comment in full, analyzed its reasoning
             | quality, elaborated on the self-undermining epistemological
             | implications of its content, and then related that to the
             | epistemic and discourse norms we aspire to here. My
             | dismissal of it is anything but shallow, though I am of
             | course open to hearing counterarguments, which you have
             | fallen short of offering.
        
               | casey2 wrote:
               | You are clearly dismissing his experience without much
               | thought. If a park ranger stopped you in the woods and
               | said there was a mountain lion up ahead would you argue
               | that he doesn't have enough information to be sure from
               | such a quick glance?
               | 
               | Someone spending a lot of time to build one or multiple
               | skills doesn't make them an expert on everything, but
               | when they start talking like they are an expert on
               | everything because of the perceived difficulties of one
               | or more skills then red flags start to pop up and most
               | reasonable people will notice them and swiftly call them
               | out.
               | 
               | For example Elon Musk saying "At this point I think I
               | know more about manufacturing than anyone currently alive
               | on earth" even if you rationalize that as an out of
               | context deadpan joke it's still completely correct to
               | call that out as nonsense at the very least.
               | 
               | The more a person rationalizes statements like these
               | ("AI WILL KILL US ALL") made by a person or cult, the
               | more likely it is that they are a cult member and they
               | lack independent critical thinking, as they have
               | outsourced their thinking to the group. Maybe their
               | thinking is "the best thoughts"; in fact it probably
               | is, but it's dependent on the group, so their
               | individual thinking muscle is weakened, which increases
               | their morbidity. (Airstriking a data center will get
               | you killed or arrested by the US Gov. So it's better
               | for the individual to question such statements rather
               | than try to rationalize them using unprovable nonsense
               | like god or AGI.)
        
               | johnfn wrote:
               | If a park ranger said "I looked over the first 2% of the
               | park and concluded there's no mountain lions" - that is,
               | made an assessment on the whole from inspection of a
               | narrow segment - I don't think I would take his word on
               | the matter. If OP had more experience to support his
               | statement, he should have included it, rather than
               | writing a shallow, one-sentence dismissal.
        
               | TeMPOraL wrote:
               | I think the recently popular way of saying "I looked over
               | 2% and assumed it generalizes" these days is to call the
               | thing _ergodic_.
               | 
               | Which of course the blog article is not, but then at
               | least the complaint wouldn't sound so obviously shallow.
        
               | johnfn wrote:
               | That's great, I'm definitely going to roll that into my
               | vocabulary.
        
       | gooseus wrote:
       | I've never thought ill of Scott Aaronson and have often admired
       | him and his work when I stumble across it.
       | 
       | However, reading this article about all these people at their
       | "Galt's Gultch", I thought -- "oh, I guess he's a rhinoceros now"
       | 
       | https://en.wikipedia.org/wiki/Rhinoceros_(play)
       | 
       | Here's a bad joke for you all -- What's the difference between a
       | "rationalist" and "rationalizer"? Only the incentives.
        
         | dcminter wrote:
         | Upvote for the play link - that's interesting and I hadn't
         | heard of it before. Worthy of a top-level post IMO.
        
           | gooseus wrote:
           | I heard of the play originally from Chapter 10 of On Tyranny
           | by Timothy Snyder:
           | 
           | https://archive.org/details/on-tyranny-twenty-lessons-
           | from-t...
           | 
           | Which I did post top-level here on November 7th -
           | https://news.ycombinator.com/item?id=42071791
           | 
           | Unfortunately it didn't get a lot of traction and dang told me
           | that there wasn't a way to re-up or "second chance" the post
           | due to the HN policy on posts "correlated with political
           | conflict".
        
             | dcminter wrote:
             | Ah, I guess I see his point; I can't see the discussion
             | being about use of metaphor in political fiction rather
             | than whose team is worst.
             | 
             | Still, I'm glad I now know the reference.
        
         | NoGravitas wrote:
         | I have always considered Scott Aaronson the least bad of the
         | big-name rationalists. Which makes it slightly funny that he
         | didn't realize he was one until Scott Siskind told him he was.
        
           | wizzwizz4 wrote:
           | Reminds me of Simone de Beauvoir and feminism. She wrote _the
           | book_ on (early) feminism, yet didn't consider herself a
           | feminist until much later.
        
       | Joker_vD wrote:
       | Ah, so it's like the Order of the October Star: certain people
       | have simply realized that they are entitled to wear it. Or,
       | rather, that they had always been entitled to wear it. Got it.
        
       | samuel wrote:
       | I'm currently reading Yudkowsky's "Rationality: from AI to
       | zombies". Not my first try, since the book is just a collection
       | of blog posts and I found it a bit hard to swallow due to its
       | repetitiveness, so I gave up after the first 50 "chapters" the
       | first time I tried. Now I'm enjoying it way more, probably
       | because I'm more interested in the topic now.
       | 
       | For those who haven't delved (ha!) into his work or have been
       | put off by the cultish looks, I have to say that he's
       | genuinely onto something. There are a lot of practical ideas
       | that are pretty useful for everyday thinking ("Belief in Belief",
       | "Emergence", "Generalizing from fiction", etc...).
       | 
       | For example, I recall being in a lot of arguments that are purely
       | "semantical" in nature. You seem to disagree about something but
       | it's just that both sides aren't really referring to the same
       | phenomenon. The source of the disagreement is just using the same
       | word for different, but related, "objects". This is something
       | that seems obvious, but the kind of thing you only realize in
       | retrospect, and I think I'm much better equipped now to be aware
       | of it in real time.
       | 
       | I recommend giving it a try.
        
         | greener_grass wrote:
         | I think there is an arbitrage going on where STEM types who
         | lack background in philosophy, literature, history are super
         | impressed by basic ideas from those subjects being presented to
         | them by stealth.
         | 
         | Not saying this is you, but these topics have been discussed
         | for thousands of years, so it should at least be surprising
         | if Yudkowsky were breaking new ground.
        
           | samuel wrote:
           | I don't claim that his work is original (the AI-related
           | work probably is, but it's only tangentially related to
           | rationalism), but it's clearly presented and practical.
           | 
           | And, BTW, I could just be ignorant in a lot of these topics,
           | I take no offense at that. Still, I think most people can
           | learn something from an unprejudiced reading.
        
           | elt895 wrote:
           | Are there other philosophy- or history-grounded sources that
           | are comparable? If so, I'd love some recommendations.
           | Yudkowsky and others have their problems, but their texts
           | have an interesting points, are relatively easy to read and
           | understand, and you can clearly see which real issues they're
           | addressing. From my experience, alternatives tend to fall
           | into two categories: 1. Genuine classical philosophy, which
           | is usually incredibly hard to read and after 50 pages I have
           | no idea what the author is even talking about anymore. 2.
            | Basically self-help books that take one or a very few ideas
            | and repeat them ad nauseam for 200 pages.
        
             | NoGravitas wrote:
             | I don't know if there's anything like a comprehensive high-
             | level guide to philosophy that's any good, though of course
             | there are college textbooks. If you want real/academic
             | philosophy that's just more readable, I might suggest
             | Eugene Thacker's "The Horror of Philosophy" series
             | (starting with "In The Dust Of This Planet"), especially if
             | you are a horror fan already.
        
             | ashwinsundar wrote:
             | I don't have an answer here either, but after suffering
             | through the first few chapters of HPMOR, I've found that
              | Yudk and other tech-bros posing as philosophers are
             | basically like leaky, dumbed-down abstractions for core
             | philosophical ideas. Just go to the source and read about
             | utilitarianism and deontology directly. Yudk is like the
             | Wix of web development - sure you can build websites but
             | you're not gonna be a proper web developer unless you learn
             | HTML, CSS and Javascript. Worst of all, crappy abstractions
             | train you in some actively bad patterns that are hard to
              | unlearn.
             | 
             | It's almost offensive - are technologists so incapable of
             | understanding philosophy that Yudk has to reduce it down to
             | the least common denominator they are all familiar with -
             | some fantasy world we read about as children?
        
               | AnimalMuppet wrote:
               | I'd like what the original sources would have written if
               | someone had fed them some speak-clearly pills. Yudkowsky
               | and company may have the dumbing-down problem, but the
               | original sources often have a clarity problem. (That's
               | why people are still arguing about what they meant
               | centuries later. Not just whether they were right -
               | though they argue about that too - but _what they
               | meant_.)
               | 
               | Even better, I'd like some filtering out of the parts
               | that are clearly wrong.
        
               | HDThoreaun wrote:
               | HPMOR is not supposed to be rigorous. It's supposed to be
               | entertaining in a way that rigorous philosophy is not.
               | You could make the same argument about any of Camus'
               | novels but again that would miss the point. If you want
                | something more rigorous, Yudkowsky has it; it's a bit
                | surprising to me to complain he isn't rigorous without
                | talking about his rigorous work.
        
             | wannabebarista wrote:
             | Likely the best resource to learn about philosophy is the
             | Stanford Encyclopedia of Philosophy [0]. It's meant to
             | provide a rigorous starting point for learning about a
             | topic, where 1. you won't get bogged down in a giant tome
             | on your first approach and 2. you have references for
              | further reading.
             | 
             | Obviously, the SEP isn't perfect, but it's a great place to
             | start. There's also the Internet Encyclopedia of Philosophy
             | [1]; however, I find its articles to be more hit or miss.
             | 
             | [0] https://plato.stanford.edu
             | 
             | [1] https://iep.utm.edu
        
             | voidhorse wrote:
             | It's not a nice response but I would say: don't be so lazy.
             | Struggle through the hard stuff.
             | 
             | I say this as someone who had the opposite experience: I
             | had a decent humanities education, but an _abysmal_
             | mathematics education, and now I am tackling abstract
              | mathematics myself. It's hard. I need to read sections of
             | works multiple times. I need to sit down and try to work
             | out the material for myself on paper.
             | 
             | Any impression that one discipline is easier than another
             | probably just stems from the fact that you had good guides
             | for the one and had the luck to learn it when your brain
             | was really plastic. You can learn the other stuff too, just
             | go in with the understanding that there's no royal road to
             | philosophy just as there's no royal road to mathematics.
        
               | sn9 wrote:
               | People are likely willing to struggle through hard stuff
               | if the applications are obvious.
               | 
               | But if you can't even narrow the breadth of possible
               | choices down to a few paths that can be traveled, you
                | can't be surprised when people take the one they know,
                | which is also easier and has more immediate payoffs.
        
           | bnjms wrote:
           | I think you're mostly right.
           | 
            | But also that it isn't what Yudkowsky is (was?) trying to
           | do with it. I think he's trying to distill useful tools which
           | increase baseline rationality. Religions have this. It's what
            | the original philosophers are missing (at least as taught;
            | happy to hear counterexamples).
        
             | ashwinsundar wrote:
             | I think I'd rather subscribe to an actual religion, than
             | listen to these weird rationalist types of people who seem
             | to have solved the problem that is "everything". At least
             | there is some interesting history to learn about with
             | religion
        
               | bnjms wrote:
               | I would too if I could but organized religions make me
                | uncomfortable, even though I admire parts of them. As with
                | my admiration, you don't need to like the rationality
               | types or believe in their program to find one or more of
               | their tools useful.
               | 
                | I'll also respond to the silent downvoters' apparent
               | disagreement. CFAR holds workshops and a summer camp for
               | teaching rationality tools. In HPMoR Harry discusses the
               | way he thinks and why. I read it as more of a way to
               | discuss EY's views in fiction as much as fiction itself.
        
           | sixo wrote:
            | To the STEM-enlightened mind, the classical understanding and
           | pedagogy of such ideas is underwhelming, vague, and riddled
           | with language-game problems, compared to the precision a
           | mathematically-rooted idea has.
           | 
           | They're rederiving all this stuff not out of obstinacy, but
           | because they prefer it. I don't really identify with
           | rationalism per se, but I'm with them on this--the humanities
            | are over-cooked and a humanities education tends to be a
            | tedious slog through outmoded ideas divorced from reality.
        
             | biofox wrote:
             | If you contextualise the outmoded ideas as part of the
             | Great Conversation [1], and the story of how we reached our
             | current understanding, rather than objective statements of
              | fact, then they become a lot more valuable and worthy of
             | study.
             | 
             | [1] https://en.wikipedia.org/wiki/Great_Conversation
        
             | jay_kyburz wrote:
             | I have kids in high school. We sometimes talk about the
             | difference between the black and white of math or science,
             | and the wishy washy grey of the humanities.
             | 
              | You can be right or wrong in math. You can have an opinion
              | in English.
        
           | HDThoreaun wrote:
           | Rationalism largely rejects continental philosophy in favor
           | of a more analytic approach. Yes these ideas are not new, but
           | they're not really the mainstream stuff you'd see in
           | philosophy, literature, or history studies. You'd have to
           | seek out these classes specifically to find them.
        
             | TimorousBestie wrote:
             | They largely reject analytic philosophy as well. Austin and
             | Whitehead are roughly as detestable to a Rationalist as
             | Foucault and Marx.
             | 
             | Carlyle, Chesterton and Thoreau are about the limit of
             | their philosophical knowledge base.
        
           | FeepingCreature wrote:
           | In AI finetuning, there's a theory that the model already
           | contains the right ideas and skills, and the finetuning just
           | raises them to prominence. Similarly in philosophic pedagogy,
           | there's huge value in taking ideas that are correct but
           | unintuitive and maybe have 30% buy-in and saying "actually,
           | this is obviously correct, also here's an analysis of why you
           | wouldn't believe it anyway and how you have to think to
           | become able to believe it". That's most of what the Sequences
           | are: they take from every field of philosophy the ideas that
            | are _actually correct_, and say "okay actually, we don't
           | need to debate this anymore, this just seems to be the truth
           | because so-and-so." (Though the comments section vociferously
           | disagrees.)
           | 
           | And it turns out if you do this, you can discard 90% of
           | philosophy as historical detritus. You're still taking ideas
           | _from_ philosophy, but which ideas matters, and how you
           | present them matters. The massive advantage of the Sequences
           | is they have _justified and well-defended confidence_ where
           | appropriate. And if you manage to pick the right answers
           | again and again, you get a system that actually hangs
            | together, and IMO it's to philosophy's detriment that it
           | doesn't do this itself much more aggressively.
           | 
           | For instance, 60% of philosophers are compatibilists.
           | Compatibilism is _really obviously_ correct.  "What are you
           | complaining about, that's a majority, isn't that good?" What
           | is wrong with those 40% though? If you're in those 40%, what
           | arguments may convince you? Repeat to taste.
        
         | Bjartr wrote:
         | Yeah, the whole community side to rationality is, at best,
         | questionable.
         | 
         | But the tools of thought that the literature describes are
         | invaluable with one _very important caveat_.
         | 
          | The _moment_ you think something like "I am more correct than
          | this other person _because_ I am a rationalist" is the moment
         | you fail as a rationalist.
         | 
         | It is an incredibly easy mistake to make. To make effective use
         | of the tools, you need to become more humble than before you
         | were using them or you just turn into an asshole who can't be
         | reasoned with.
         | 
         | If you're saying "well actually, I'm right" more often than "oh
         | wow, maybe I'm wrong", you've failed as a rationalist.
        
           | the_af wrote:
           | > _The moment you think something like "I am more correct
           | than this other person because I am a rationalist" is the
           | moment you fail as a rationalist._
           | 
           | It's very telling that some of them went full "false modesty"
           | by naming sites like "LessWrong", when you just know they
           | actually mean "MoreRight".
           | 
           | And in reality, it's just a bunch of "grown teenagers"
           | posting their pet theories online and thinking themselves
           | "big thinkers".
        
             | mariusor wrote:
             | > you just know they actually mean "MoreRight".
             | 
             | I'm not affiliated with the rationalist community, but I
             | always interpreted "Less Wrong" as word-play on how "being
             | right" is an absolute binary: you can either be right, or
             | not be right, while "being wrong" can cover a very large
             | gradient.
             | 
             | I expect the community wanted to emphasize how people
             | employing the specific kind of Bayesian iterative reasoning
             | they were proselytizing would arrive at slightly lesser
             | degrees of wrong than the other kinds that "normal" people
             | would use.
             | 
             | If I'm right, your assertion wouldn't be totally
             | inaccurate, but I think it might be missing the actual
             | point.
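        To make the "lesser degrees of wrong" idea above concrete, here is
        a minimal sketch of iterative Bayesian updating in Python. The
        coin-flip setup and all numbers are illustrative assumptions, not
        anything from the thread: an agent starts out badly wrong (90%
        sure a fair coin is biased toward heads) and each observation
        leaves it a little less wrong on average.
        
            # Posterior probability that the coin is biased, after one flip.
            def bayes_update(p_biased, heads, p_heads_biased=0.7, p_heads_fair=0.5):
                likelihood_biased = p_heads_biased if heads else 1 - p_heads_biased
                likelihood_fair = p_heads_fair if heads else 1 - p_heads_fair
                numer = likelihood_biased * p_biased
                return numer / (numer + likelihood_fair * (1 - p_biased))
        
            belief = 0.9  # start out badly wrong: 90% sure the coin is biased
            for heads in [True, False, False, True, False, False, True, False]:
                belief = bayes_update(belief, heads)
                print(round(belief, 3))  # drifts downward as fair-ish data accumulates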
        
               | the_af wrote:
                | > _I'm not affiliated with the rationalist community,
                | but I always interpreted "Less Wrong" as word-play on how
                | "being right" is an absolute binary: you can either be
                | right, or not be right, while "being wrong" can cover a
               | very large gradient._
               | 
               | I know that's what they mean at the surface level, but
               | you just know it comes with a high degree of smugness and
               | false modesty. "I only know that I know nothing" --
               | maybe, but they ain't no modern day Socrates, they are
               | just a bunch of nerds going online with their thoughts.
        
               | Matticus_Rex wrote:
               | So much projection.
        
               | the_af wrote:
               | I don't think I'm more clever than the average person,
               | nor have I made this my identity or created a whole tribe
               | around it, nor do I attend nor host conferences around my
               | cleverness, rationality, or weird sexual fetishes.
               | 
               | In other words: no.
        
               | mariusor wrote:
               | Sometimes people enjoy being clever not because they want
               | to rub it in your face that you're not, but because it's
               | fun. I usually try not to take it personally when I don't
               | get the joke and strive to do better next time.
        
               | the_af wrote:
               | That's mildly insulting of you.
               | 
               | I do get the joke; I think it's an instance of their
               | feelings of "rational" superiority.
               | 
               | Assuming the other person didn't get the joke is very...
               | _irrational_ of you.
        
               | zahlman wrote:
               | >but you just know it comes with a high degree of
               | smugness and false modesty
               | 
               | No; I know no such thing, as I have no good reason to
               | believe it, and plenty of countering evidence.
        
               | astrange wrote:
               | Very rational of you, but that's the problem with the
               | whole system.
               | 
               | If you want to avoid thinking you're right all the time,
               | it doesn't help to be clever and say the logical
               | opposite. "Rationally" it should work, but it's bad
                | because you're still thinking about it! It's like the
                | "don't think of a pink elephant" thing.
               | 
               | Other approaches I recommend:
               | 
               | * try and fail to invest in stocks
               | 
               | * read Meaningness's https://metarationality.com
               | 
               | * print out this meme and put it on your wall
               | https://imgflip.com/i/82h43h
        
               | mananaysiempre wrote:
               | > I always interpreted "Less Wrong" as word-play on how
               | "being right" is an absolute binary
               | 
               | Specifically (AFAIK) a reference to Asimov's
               | description[1] of the idea:
               | 
               | > [W]hen people thought the earth was flat, they were
               | wrong. When people thought the earth was spherical, they
               | were wrong. But if _you_ think that thinking the earth is
               | spherical is _just as wrong_ as thinking the earth is
               | flat, then your view is wronger than both of them put
               | together.
               | 
               | [1] https://skepticalinquirer.org/1989/10/the-relativity-
               | of-wron...
        
               | mariusor wrote:
               | Cool, I didn't know the quote, nor that it was
               | inspiration for the name. Thank you.
        
           | wizzwizz4 wrote:
           | Chapter 67. https://www.readthesequences.com/Knowing-About-
           | Biases-Can-Hu... (And since it's in the book, and people know
            | about it, _obviously_ they're not doing it themselves.)
        
             | FeepingCreature wrote:
             | Also the Valley of Bad Rationality tag.
             | https://www.lesswrong.com/w/valley-of-bad-rationality
        
             | TeMPOraL wrote:
              | Also that the Art needs to be about something other than
             | itself, and a dozen different things. This failure mode is
             | well known in the community; Eliezer wrote about it to
             | death, and so did others.
        
           | wannabebarista wrote:
           | This reminds me of undergrad philosophy courses. After the
           | intro logic/critical thinking course, some students can't
            | resist seeing affirming-the-consequent and post hoc fallacies
           | everywhere (even if more are imagined than not).
        
           | zahlman wrote:
           | > The moment you think something like "I am more correct than
           | this other person because I am a rationalist" is the moment
           | you fail as a rationalist.
           | 
           | Well said. Rationalism is about doing rationalism, not about
           | being a rationalist.
           | 
           | Paul Graham was on the right track about that, though
           | seemingly for different reasons (referring to "Keep Your
           | Identity Small").
           | 
           | > If you're saying "well actually, I'm right" more often than
           | "oh wow, maybe I'm wrong", you've failed as a rationalist.
           | 
           | On the other hand, success is supposed to look exactly like
           | _actually being_ right more often.
        
             | Bjartr wrote:
             | > success is supposed to look exactly like actually being
             | right more often.
             | 
             | I agree with this, and I don't think it's at odds with what
             | I said. The point is to never stop sincerely believing you
             | could be wrong. That you are right more often is exactly
             | why it's such an easy trap to fall into. The tools of
             | rationality only help as long as you are actively applying
             | them, which requires a certain amount of humility, even in
             | the face of success.
        
         | hiAndrewQuinn wrote:
         | If you're in it just to figure out the core argument for why
         | artificial intelligence is dangerous, please consider reading
          | the first few chapters of Nick Bostrom's _Superintelligence_
          | instead. You'll get a lot more bang for your buck that way.
        
         | quickthrowman wrote:
         | Your time would probably be better spent reading his magnum
         | opus, _Harry Potter and the Methods of Rationality_.
         | 
         | https://hpmor.com/
        
         | turtletontine wrote:
          | > For example, I recall being in a lot of arguments that are
          | purely "semantical" in nature.
          | 
          | I believe this is what Wittgenstein called "language games".
        
       | d--b wrote:
        | Sorry, I haven't followed; what is it that these guys call
        | Rationalism?
        
         | pja wrote:
         | https://en.wikipedia.org/wiki/Rationalist_community
         | 
         | Fair warning: when you turn over some of the rocks here you
         | find squirming, slithering things that should not be given
         | access to the light.
        
           | d--b wrote:
           | thanks much
        
           | nosrepa wrote:
           | And Harry Potter fan fiction.
        
       | retRen87 wrote:
        | He already had a rationalist "coming out" like ages ago. Dude,
        | just make up your mind.
       | 
       | https://scottaaronson.blog/?p=2537
        
         | kragen wrote:
         | While this was an interesting and enjoyable read, it doesn't
         | seem to be a "rationalist 'coming out'". On the contrary, he's
         | just saying he would have liked going to a 'rationalist'
         | meeting.
        
           | retRen87 wrote:
           | The last paragraph discusses how he's resisted the label and
           | then he closes with "the rationalists have walked the walk
           | and rationaled the rational, and thus they've given me no
           | choice but to stand up and be counted as one of them."
           | 
           | He's clearly identifying as a rationalist there
        
             | kragen wrote:
             | Oh, you're right! I'd add that it's actually the
             | penultimate paragraph of the first of two postscripts
             | appended to the post. I should have read those, and I
             | appreciate the correction.
        
       | resource_waste wrote:
       | "I'm a Rationalist"
       | 
       | "Here are some labels I identify as"
       | 
        | So they aren't rational enough to understand first principles
       | don't objectively exist.
       | 
       | They were corrupted by words of old men, and have built a
        | foundation of understanding on them. This isn't rationality, but
        | rather Reason-based.
       | 
       | I consider Instrumentalism and Bayesian epistemology to be the
       | best we can get towards knowledge.
       | 
       | I'm going to be a bit blunt and not humble at all, this person is
       | a philosophical inferior to myself. Their confidence is hubris.
        | They haven't discovered epistemology. There isn't enough
       | skepticism in their claims. They use black and white labels and
       | black and white claims. I remember when I was confident like the
       | author, but a few empirical pieces of evidence made me realize I
       | was wrong.
       | 
       | "it is a habit of mankind to entrust to careless hope what they
       | long for, and to use sovereign reason to thrust aside what they
       | do not fancy."
        
       | bargainbin wrote:
       | Never ceases to amaze me that the people who are clever enough to
       | always be right are never clever enough to see how they look like
       | complete wankers when telling everyone how they're always right.
        
         | falcor84 wrote:
          | I don't see how that's any more "wanker" than this famous
          | saying of Socrates's; Western thought is wankers all the way
         | down.
         | 
         | > Although I do not suppose that either of us knows anything
         | really beautiful and good, I am better off than he is - for he
         | knows nothing, and thinks he knows. I neither know nor think I
         | know.
        
         | cogman10 wrote:
         | > clever enough to always be right
         | 
         | Oh, see here's the secret. Lots of people THINK they are always
         | right. Nobody is.
         | 
         | The problem is you can read a lot of books, study a lot of
         | philosophy, practice a lot of debate. None of that will cause
         | you to be right when you are wrong. It will, however, make it
         | easier for you to sell your wrong position to others. It also
         | makes it easier for you to fool yourself and others into
         | believing you're uniquely clever.
        
         | KolibriFly wrote:
         | Sometimes the meta-skill of how you come across while being
         | right is just as important as the correctness itself...
        
         | gadders wrote:
         | It's a coping mechanism for autists, mainly.
        
         | lowbloodsugar wrote:
         | "I don't like how they said it" and "I don't like how this made
         | me feel" is the aspect of the human brain that has given us
         | Trump. As long as the idea that "how you feel about it" is a
         | basis for any decision making, the world will continue to be
          | fucked. The author's audience largely understands that "this made
         | me feel" is an indication that introspection is required, and
         | not an indication that the author should be ignored.
        
       | dr_dshiv wrote:
       | Since intuitive and non-rational thinking are demonstrably
       | rational in the face of incomplete information, I guess we're all
       | rationalists. Or that's how I'm rationalizing it, anyway.
        
       | aosaigh wrote:
       | > "You're Scott Aaronson?! The quantum physicist who's always
       | getting into arguments on the Internet, and who's essentially
       | always right, but who sustains an unreasonable amount of psychic
       | damage in the process?"
       | 
       | Give me strength. So much hubris with these guys (and they're
       | almost always guys).
       | 
       | I would have assumed that a rationalist would look for truth and
       | not correctness.
       | 
       | Oh wait, it's all just a smokescreen for know-it-alls to show you
       | how smart they are.
        
         | api wrote:
         | That's exactly what Rationalism(tm) is.
         | 
         | The basic trope is showing off how smart you are and what I
         | like to call "intellectual edgelording." The latter is
         | basically a fetish for contrarianism. The big flex is to take a
         | very contrarian position -- according to what one imagines is
         | the prevailing view -- and then defend it in the most creative
         | way possible.
         | 
         | Intellectual edgelording gives us shit like neoreaction
         | ("monarchy is good actually" -- what a contrarian flex!),
         | timeless decision theory, and wild-ass shit like the Zizians,
         | effective altruists thinking running a crypto scam is the best
         | path to maximizing their utility, etc.
         | 
         | Whether an idea is contrarian or not is unrelated to whether
         | it's a good idea or not. I think the fetish for contrarianism
         | might have started with VCs playing public intellectual, since
         | as a VC you make the big bucks when you make a contrarian bet
         | that pays off. But I think this is an out-of-context
         | misapplication of a lesson from investing to the sphere of
         | scientific and philosophical truth. Believing a lot of shitty
         | ideas in the hopes of finding gems is a good way to drive
         | yourself bonkers. "So I believe in the flat Earth, vaccines
         | cause autism, and loop quantum gravity, so I figure one big win
          | in this portfolio makes me a genius!"
         | 
         | Then there's the cults. I think this stuff is to Silicon Valley
         | and tech what Scientology is to Hollywood and the film and
         | music industries.
        
           | cshimmin wrote:
           | Thank you for finally making this make sense to me.
        
             | api wrote:
             | Another thing that's endemic in Rationalism is a kind of
             | specialized variety of the Gish gallop.
             | 
             | It goes like this:
             | 
             | (1) Assert a set of priors (with emphasis on the word
              | _assert_).
             | 
             | (2) Reason from those priors to some conclusion.
             | 
              | (3) Seamlessly, without skipping a beat, take that conclusion
              | as _valid_ because the reasoning appears consistent, and
             | make that part of a new set of priors.
             | 
             | (4) Repeat, or rather _recurse_ since the new set of priors
             | is built on previous iterations.
             | 
             | The entire concept of science is founded on the idea that
             | you can't do that. You have to stop and touch grass, which
             | in science means making observations or doing experiments
             | if possible. You have to see if the conclusion you reached
             | actually matches reality in any meaningful way. That's
             | because reason alone is fragile. As any programmer knows, a
             | single error or a single mistaken prior propagates and
             | renders the entire tree invalid. Do this recursively and
             | one error anywhere in this crystalline structure means
             | you've built a gigantic tower of bullshit.
             | 
             | I compare it to the Gish gallop because of how
             | enthusiastically they do it, and how by doing it so fast it
             | becomes hard to try to argue against. You end up having to
             | try to counter a firehose of Oh So Very Smart complicated
             | exquisitely reasoned nonsense.
             | 
             | Or you can just, you know, conclude that this entire method
             | of determining truth is invalid and throw the entire thing
             | in the trash.
             | 
             | A good "razor" for this kind of thing is to judge it by its
             | fruit. So far the fruit is AI hysteria, cults like the
              | Zizians, neoreactionary political ideology, Sam Bankman-
              | Fried, etc. Has anything good or useful come from any of
             | this?
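        The fragility point above can be put in numbers. A toy sketch, with
        an assumed (made-up) per-step reliability: even when each inference
        step is individually quite likely to be sound, the probability that
        a long chain of them contains no error decays geometrically once
        each conclusion is recycled as the next step's premise.
        
            # If each step is independently 95% likely to be sound, a chain of
            # N steps is error-free with probability 0.95 ** N.
            step_reliability = 0.95  # assumption, for illustration only
            for steps in (1, 5, 10, 20, 50):
                print(f"{steps:>2} steps: P(no error anywhere) = {step_reliability ** steps:.2f}")
            # 20 steps: ~0.36; 50 steps: ~0.08 -- the crystalline tower almost
            # certainly has a flaw somewhere.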
        
         | ModernMech wrote:
         | Rationalists are better called Rationalizationists, really.
        
       | NoGravitas wrote:
       | Probably the most useful book ever written about topics adjacent
       | to capital-R Rationalism is "Neoreaction, A Basilisk: Essays on
       | and Around the Alt-Right" [1], by Elizabeth Sandifer. Though the
       | topic of the book is nominally the Alt-Right, a lot more of it is
       | about the capital-R Rationalist communities and individuals that
       | incubated the neoreactionary movement that is currently dominant
       | in US politics. It's probably the best book to read for
       | understanding how we got politically and intellectually from
       | where we were in 2010, to where we are now.
       | 
       | https://www.goodreads.com/book/show/41198053-neoreaction-a-b...
        
         | kragen wrote:
         | Thanks for the recommendation! I hadn't heard about the book.
        
         | mananaysiempre wrote:
         | That book, IMO, reads very much like a smear attempt, and not
         | one done with a good understanding of the target.
         | 
         | The premise, with an attempt to tie capital-R Rationalists to
          | the neoreactionaries through a sort of guilt by association, is
         | frankly weird: Scott Alexander is well-known among the former
         | to be essentially the only prominent figure that takes the
         | latter seriously--seriously enough, that is, to write a large
         | as-well-stated-as-possible survey[1] followed by a humongous
         | point-by-point refutation[2,3]; whereas the "cult leader" of
         | the rationalists, Yudkowsky, is on the record as despising
         | neoreactionaries to the point of refusing to discuss their
         | views. (As far as recent events, Alexander wrote a scathing
         | review of Yarvin's involvement in Trumpist politics[4] whose
         | main thrust is that Yarvin has betrayed basically everything he
         | advocated for.)
         | 
         | The story of the book's conception also severely strains an
         | assumption of good faith[5]: the author, Elizabeth Sandifer,
         | explicitly says it was to a large extent inspired, sourced, and
         | edited by David Gerard, a prominent contributor to RationalWiki
         | and r/SneerClub (the "sneerers" mentioned in TFA) and Wikipedia
         | administrator who after years of edit-warring got topic-banned
         | from editing articles about Scott Alexander (Scott Siskind) for
         | conflict of interest and defamation[6] (including adding links
         | to the book as a source for statements on Wikipedia about links
         | between rationalists and neoreaction). Elizabeth Sandifer
         | herself got banned for doxxing a Wikipedia editor during
         | Gerard's earlier edit war at the time of Manning's gender
         | transition, for which Gerard was also sanctioned[7].
         | 
         | [1] https://slatestarcodex.com/2013/03/03/reactionary-
         | philosophy...
         | 
         | [2] https://slatestarcodex.com/2013/10/20/the-anti-
         | reactionary-f...
         | 
         | [3] https://slatestarcodex.com/2013/10/24/some-preliminary-
         | respo...
         | 
         | [4] https://www.astralcodexten.com/p/moldbug-sold-out
         | 
         | [5] https://www.tracingwoodgrains.com/p/reliable-sources-how-
         | wik...
         | 
         | [6]
         | https://en.wikipedia.org/wiki/Wikipedia:Administrators%27_no...
         | 
         | [7]
         | https://en.wikipedia.org/wiki/Wikipedia:Arbitration/Requests...
        
           | Aurornis wrote:
           | I always find it interesting that when the topic of
           | rationalists' fixation on neoreactionary topics comes into
           | question, the primary defenses are that it's important to
           | look at controversial ideas and that we shouldn't dismiss
           | novel ideas because we don't like the group sharing them.
           | 
           | Yet as soon as the topic turns to criticisms of the
           | rationalist community, we're supposed to ignore those ideas
           | and instead fixate on the messenger, ignore their arguments,
           | and focus on ad-hominem attacks that reduce their
           | credibility.
           | 
           | It's no secret that Scott Alexander had a bit of a fixation
           | on neoreactionary content for years. The leaked e-mails
           | showed he believed there to be "gold" in some of their ideas
           | and he enjoyed the extra traffic it brought to his blog. I
           | know the rationalist community has been working hard to
           | distance themselves from that era publicly, but dismissing
           | that chapter of the history because it feels too much like a
           | "smear" or because we're not supposed to like the author
           | feels extremely hypocritical given the context.
        
         | FeepingCreature wrote:
         | If you want a book on the rationalists that's not a smear
         | dictated by a person who is banned from their Wikipedia page
         | for massive npov violations, I hear Chivers' _The AI Does Not
          | Hate You_ and _The Rationalist's Guide to the Galaxy_ are good.
         | 
         | (Disclaimer: Chivers kinda likes us, so if you like one book
         | you'll probably dislike the other.)
        
         | Matticus_Rex wrote:
         | > Probably the most useful book
         | 
         | You mean "probably the book that confirms my biases the most"
        
         | kurtis_reed wrote:
         | The "neoreactionary movement" is definitely not dominant
        
         | zahlman wrote:
         | > incubated the neoreactionary movement that is currently
         | dominant in US politics
         | 
         | > Please don't use Hacker News for political or ideological
         | battle. It tramples curiosity.
         | 
         | You are presenting a highly contentious worldview for the sake
         | of smearing an outgroup. Please don't. Further, the smear
         | relies on guilt by association that many (including myself)
         | would consider invalid on principle, and which further doesn't
         | even bear out on cursory examination.
         | 
         | At least take a moment to see how others view the issue.
         | "Reliable Sources: How Wikipedia Admin David Gerard Launders
         | His Grudges Into the Public Record"
         | https://www.tracingwoodgrains.com/p/reliable-sources-how-wik...
         | includes lengthy commentary on Sandifer (a close associate of
         | Gerard)'s involvement with rationalism, and specifically on the
         | work you cite and its biases.
        
         | Aurornis wrote:
         | Ironically, bringing this topic up always turns the
         | conversation to ad-hominem attacks about the messenger while
         | completely ignoring the subject matter. That's exactly the type
         | of argument rationalists claim to despise, but it gets brought
         | up whenever inconvenient arguments appear about their own
         | communities. All of the comments dismissing the _content_
         | because of the author or refusing to acknowledge the arguments
          | because it feels like a "smear" are admitting their inability
          | to judge an argument on its own merits.
         | 
         | If anyone wants to actually engage with the topic instead of
         | trying to ad-hominem it away, I suggest at least reading Scott
         | Alexander's own words on why he so frequently engages in
         | neoreactionary topics:
         | https://www.reddit.com/r/SneerClub/comments/lm36nk/comment/g...
         | 
         | Some select quotes:
         | 
         | > First is a purely selfish reason - my blog gets about 5x more
         | hits and new followers when I write about Reaction or gender
         | than it does when I write about anything else, and writing
         | about gender is horrible. Blog followers are useful to me
         | because they expand my ability to spread important ideas and
         | network with important people.
         | 
         | > Third is that I want to spread the good parts of Reactionary
         | thought
         | 
         | > Despite considering myself pretty smart and clueful, I
         | constantly learn new and important things (like the crime
         | stuff, or the WWII history, or the HBD) from the Reactionaries.
         | Anything that gives you a constant stream of very important new
         | insights is something you grab as tight as you can and never
         | let go of.
         | 
         | In this case, HBD means "human biodiversity" which is the alt-
         | right's preferred term for racialism, or the division of humans
         | into races with special attention to the relative intelligence
         | of those different races. This is an oddly recurring theme on
         | Scott Alexander's work. He even wrote a coded blog post to his
         | followers about how he was going to deny it publicly while
         | privately holding it to be very correct.
        
       | apples_oranges wrote:
       | Never heard of the man, but that was a fun read. And it looks
        | like a fun club to be part of. Until it becomes unbearable,
        | perhaps. Also raises the chances of getting invited to birthday
        | orgies...? Perhaps I should have stayed in academia...
        
         | moolcool wrote:
          | > Until it becomes unbearable perhaps
         | 
         | Until?
        
       | Barrin92 wrote:
       | _> "frankly, that they gave off some (not all) of the vibes of a
       | cult, with Eliezer as guru. Eliezer writes in parables and koans.
       | He teaches that the fate of life on earth hangs in the balance,
       | that the select few who understand the stakes have the terrible
       | burden of steering the future_"
       | 
        | One of the funniest and most accurate turns of phrase in my mind
       | is Charles Stross' characterization of rationalists as "duck
       | typed Evangelicals". I've come to the conclusion that American
       | atheists just don't exist, in particular Californians. Five
       | minutes after they leave organized religion they're in a techno
       | cult that fuses chosen people myths, their version of the Book of
       | Revelation, gnosticism and what have you.
       | 
       | I used to work abroad in Shenzhen for a few years and despite
        | meeting countless people as interested in and obsessed with
       | technology, if not more than the people mentioned in this
        | blogpost, there's just no corollary to this. There's no
       | millenarian obsession over machines taking over the world,
        | bizarre trust in rationalism or cult-like compounds full of
       | socially isolated new age prophets.
        
       | bee_rider wrote:
       | The main things I don't like about rationalism are aesthetic (the
       | name sucks and misusing the language of Bayesian probability is
       | annoying). Sounds like they are a thoughtful and nice bunch
       | otherwise(?).
        
       | mathattack wrote:
       | Logic is an awesome tool that took us from Greek philosophers to
       | the gates on our computers. The challenge with pure rationalism
       | is checking the first principles that the thinking comes from.
       | Logic can lead you astray if the principles are wrong, or you
       | miss the complexity along the way.
       | 
        | On the missing first principles, look at Aristotle. One of
        | history's greatest logicians came to many false conclusions. [0]
       | 
       | On missing complexity, note that Natural Selection came from
       | empirical analysis rather than first principles thinking. (It
       | could have come from the latter, but was too complex) [1]
       | 
       | This doesn't discount logic, it just highlights that answers
       | should always come with provisional humility.
       | 
       | And I'm still a superfan of Scott Aaronson.
       | 
       | [0] https://www.wired.com/story/aristotle-was-wrong-very-
       | wrong-b...
       | 
       | [1] https://www.jstor.org/stable/2400494
        
         | jrm4 wrote:
         | Yup, can't stress the word "tool" enough.
         | 
         | It's a "tool," it's a not a "magic window into absolute truth."
         | 
         | Tools can be good for a job, or bad. Carry on.
        
           | jrm4 wrote:
           | looks like I riled up the Rationalists, huh
        
         | kragen wrote:
         | The 'rationalist' group being discussed here aren't Cartesian
         | rationalists, who dismissed empiricism; rather, they're
         | Bayesian empiricists. Bayesian probability turns out to be
         | precisely the unique extension of Boolean logic to continuous
         | real probability that Aristotle (nominally an empiricist!) was
         | lacking. (I think they call themselves "rationalists" because
         | of the ideal of a "rational Bayesian agent" in economics.)
         | 
         | However, they have a slogan, "One does not simply _reason_ over
         | the joint conditional probability distribution of the
         | universe." Which is to say, AIXI is uncomputable, and even AIXI
         | can only reason over computable probability distributions!
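        One concrete way to see the "extension of Boolean logic" claim (a
        sketch of the idea behind Cox's theorem, not a proof; the numbers
        are illustrative): pin the probabilities in Bayes' rule to 0 and 1
        and conditioning collapses into ordinary Boolean inference, while
        intermediate values interpolate smoothly between the two extremes.
        
            # P(H | E) by Bayes' rule.
            def posterior(prior_h, p_e_given_h, p_e_given_not_h):
                num = p_e_given_h * prior_h
                den = num + p_e_given_not_h * (1 - prior_h)
                return num / den if den else 0.0
        
            # Boolean limit: E is impossible without H, so observing E entails H
            # (the probabilistic shadow of "not-H implies not-E; E; therefore H").
            print(posterior(0.5, 1.0, 0.0))  # -> 1.0
        
            # Soften the certainties and you get graded, continuous "logic".
            print(posterior(0.5, 0.9, 0.2))  # -> ~0.818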
        
           | 1propionyl wrote:
           | They can call themselves empiricists all they like, it only
           | takes a few exposures to their number to come away with a
           | firm conviction (or, let's say, updated prior?) that _they
           | are not_.
           | 
           | First-principles reasoning and the selection of convenient
            | priors are consistently preferred over the slow, grinding
           | work of iterative empiricism and the humility to commit to
           | observation before making overly broad theoretical claims.
           | 
           | The former let you seem right about something right now. The
           | latter more often than not lead you to discover you are wrong
           | (in interesting ways) much later on.
        
         | eth0up wrote:
         | >provisional humility.
         | 
         | I hope this becomes the first ever meme with some value. We
         | need a cult... of Provisional Humility.
         | 
         | Must. Increase. The. pH
        
           | zahlman wrote:
           | > Must. Increase. The. pH
           | 
           | Those who do so would be... based?
        
             | eth0up wrote:
             | _Basically._
             | 
             | The level of humility in most subjects is low enough to
             | consume glass. We would all benefit from practicing it more
             | arduously.
             | 
             | I was merely adding support to what I thought was fine
             | advice. And it is.
        
         | edwardbernays wrote:
         | Logic is the study of what is true, and also what is provable.
         | 
         | In the most ideal circumstances, these are the same. Logic has
         | been decomposed into model theory (the study of what is true)
         | and proof theory (the study of what is provable). So much of
         | modern day rationalism is unmoored proof theory. Many of them
         | would do well to read Kant's "The Critique of Pure Reason."
         | 
         | Unfortunately, in the very complex systems we often deal with,
         | what is true may not be provable and many things which are
         | provable may not be true. This is why it's equally as important
         | to hone your skills of discernment, and practice reckoning as
         | well as reasoning. I think of it as hearing "a ring of truth,"
         | but this is obviously unfalsifiable and I must remain skeptical
         | against myself when I believe I hear this. It should be a guide
         | toward deeper investigation, not the final destination.
         | 
         | Many people are led astray by thinking. It is seductive. It
         | should be more commonly said that thinking is but a conscious
         | stumbling block on the way to unconscious perfection.
        
       | jrm4 wrote:
       | My eyes started to glaze over after a bit; so what I'm getting
        | here is there's a group that calls themselves "Rationalists," but
       | in just about every externally meaningful sense, they're smelling
       | like -- perhaps not a cult, but certainly a lot of weird
       | insider/outsider talk that feels far from rational?
        
         | pja wrote:
         | Capital r-Rationalism definitely bleeds into cult-like
         | behaviour, even if they haven't necessarily realised that
         | they're radicalising themselves.
         | 
         | They've already had a splinter rationalist group go full cult,
          | right up to & including the consequent flameout (murders & a
          | shoot-out with the cops): https://en.wikipedia.org/wiki/Zizians
        
       | pja wrote:
       | Scott Aaronson, the man who turned scrupulosity into a weapon
        | against his own psyche, is a capital-R rationalist?
       | 
       | Yeah, this surprises absolutely nobody.
        
         | great_tankard wrote:
         | "YOUR ATTENTION PLEASE: I have now joined the club everyone
         | assumed I was already a member of."
        
           | mitthrowaway2 wrote:
           | It's his personal blog, the only people whose attention he's
           | asking for are the people choosing to wander over there to
           | see what he's up to.
           | 
           | Not his fault that people deemed it interesting enough to
           | upvote to the front page of HN.
        
       | djoldman wrote:
       | Just to confirm, this is about:
       | 
       | https://en.wikipedia.org/wiki/Rationalist_community
       | 
       | and not:
       | 
       | https://en.wikipedia.org/wiki/Rationalism
       | 
       | right?
        
         | thomasjudge wrote:
         | Along these lines I am sort of skimming articles/blogs/websites
         | about Lightcone, LessWrong, etc, and I am still struggling with
         | the question...what do they DO?
        
           | Mond_ wrote:
           | Look, it's just an internet community of people who write
           | blog posts and discuss their interests on web forums.
           | 
           | Asking "What do they do?" is like asking "What do
           | Hackernewsers do?"
           | 
           | It's not exactly a coherent question. Rationalists are a
           | somewhat tighter group, but in the end the point stands. They
           | write and discuss their common interests, e.g. the progress
           | of AI, psychiatry stuff, bayesianism, thought experiments,
           | etc.
        
           | FeepingCreature wrote:
           | Twenty years or so ago, Eliezer Yudkowsky, a former proto-
            | accelerationist, realized that superintelligence was
           | probably coming, was _deeply unsafe,_ and that we should do
           | something about that. Because he had a very hard time
           | convincing people of this to him obvious fact, he first wrote
           | a very good blog about human reason, philosophy and AI, in
           | order to fix whatever was going wrong in people 's heads that
           | caused them to not understand that superintelligence was
           | coming and so on. The group of people who read, commented on
           | and contributed to this blog are called the rationalists.
           | 
           | (You're hearing about them now because these days it looks a
           | lot more plausible than in 2007 that Eliezer was right about
           | superintelligence, so the group of people who've beat the
           | drum about this for over a decade now form the natural nexus
           | around which the current iteration of project "we should do
           | something about unsafe superintelligence" is congealing.)
        
             | astrange wrote:
              | > that superintelligence was probably coming, was deeply
             | unsafe
             | 
             | Well, he was right about that. Pretty much all the details
             | were wrong, but you can't expect that much so it's fine.
             | 
             | The problem is that it's philosophically confused. Many
             | things are "deeply unsafe", the main example being driving
             | or being anywhere near someone driving a car. And yet it
             | turns out to matter a lot less, and matter in different
             | ways, than you'd expect if you just thought about it.
             | 
             | Also see those signs everywhere in California telling you
             | that everything gives you cancer. It's true, but they
             | should be reminding you to wear sunscreen.
        
           | kurtis_reed wrote:
           | Hang out and talk
        
         | FeepingCreature wrote:
         | Absolutely everybody names it wrong. The movement is called
         | rationality or "LessWrong-style rationality", explicitly to
         | differentiate it from rationalism the philosophy; rationality
         | is actually in the empirical tradition.
         | 
         | But the words are too close together, so this is about as lost
         | a battle as "hacker".
        
           | gjm11 wrote:
           | I don't think "rationality" is a good name for the movement,
           | for the same reason as I wish "effective altruism" had picked
           | a different name: it conflates the _goal_ with the
           | _achievement of the goal_. A rationalist (in the Yudkowsky
            | sense) is someone who is _trying to be rational_, in a
           | particular way. But "rationality" means _actually being_
           | rational.
           | 
           | I don't think it's actually true that rationalists-in-this-
            | sense commonly use "rationality" to refer to _the movement_,
           | though they do often use it to refer to _what the movement is
           | trying to do_.
        
       | KolibriFly wrote:
       | It's encouraging to hear that behind all the internet noise, the
       | real-life community is thriving and full of people earnestly
       | trying to build a better future
        
       | norir wrote:
       | One of my many problems with rationalism is that it generally
        | fails to acknowledge its fundamentally religious character while
       | pronouncing itself superior to all other religions.
        
       | throw7 wrote:
       | I used to snicker at these guys, but I realized I'm not being
       | humble or to be more theologically minded: gracious.
       | 
       | Recognizing we all take a step of faith to move outside of
       | solipsism into a relationship with others should humble us.
        
       | os2warpman wrote:
       | Rationalists as a movement remind me of the individuals who claim
       | to be serious about history but are only interested in a very,
       | VERY specific set of six years in one very specific part of the
       | world.
       | 
       | And boy are they extremely interested in ONLY those six years.
        
       | Mikhail_K wrote:
       | "Rationalists," the "objectivists" rebranded?
        
         | lanfeust6 wrote:
         | Political affiliation distribution is similar to the general
         | population.
        
       | mkoubaa wrote:
       | The problem with rationalism is we don't have language to express
       | our thoughts formally enough nor a compiler to transform that
       | language into something runnable (platonic AST) nor a machine
       | capable of emulating reality.
       | 
       | Expecting rational thought to correspond to reality is like
       | expecting a 6 million line program written in a hypothetical
       | programming language invented in the 1700s to run bug free on a
       | turing machine.
       | 
       | Tooling matters.
        
       | IlikeKitties wrote:
       | I once was interested in a woman who was really into the
       | effective altruism/rationalism crowd. I went to a few meetings
       | with her but my inner contrarian didn't like it.
       | 
       | Took me a few years to realize how cultish it all felt and that I
       | am somewhat happy my edgy atheist contrarian personality
        | overwrote my dick's thinking with that crowd.
        
       | jrd259 wrote:
       | I'm so out of the loop. What is the new, special sense of
        | Rationalist over what it might have meant to e.g. Descartes?
        
       | musha68k wrote:
       | Very Bay Area to assume you invented Bayesian thinking.
        
       | nathcd wrote:
       | Some of the comments here remind me of online commentary about
       | some place called "the orange site". Always wondered who they
       | were talking about...
        
         | mitthrowaway2 wrote:
         | Can't stand that place. Those people are all so sure that
         | they're right about everything.
        
       | anonnon wrote:
       | Does that mean he read the Harry Potter fanfic?
        
       | bikamonki wrote:
       | https://en.wikipedia.org/wiki/Rationalist_community
       | 
       | "In particular, several women in the community have made
       | allegations of sexual misconduct, including abuse and harassment,
       | which they describe as pervasive and condoned."
       | 
       | There's weird sex stuff, logically, it's a cult.
        
         | Matticus_Rex wrote:
         | Most weird sex stuff takes place outside of cults, so that
         | doesn't follow.
        
       | tptacek wrote:
       | Well that was a whole thing. I especially liked the existential
       | threat of Cade Metz. But ultimately, I think the great oracle of
       | Chicago got this whole thing right when he said:
       | 
       |  _-Ism's in my opinion are not good. A person should not believe
       | in an -ism, he should believe in himself. I quote John Lennon, "I
       | don't believe in Beatles, I just believe in me." Good point
       | there. After all, he was the walrus. I could be the walrus. I'd
       | still have to bum rides off people._
        
         | dragonwriter wrote:
         | > Ism's in my opinion are not good. A person should not believe
         | in an -ism, he should believe in himself
         | 
         | There's an -ism for that.
         | 
         | Actually, a few different ones depending on the exact angle
         | you look at it from: solipsism, narcissism,...
        
           | astrange wrote:
           | > There's an -ism for that
           | 
           | It's Buddhism.
           | 
           | https://en.wikipedia.org/wiki/Anatta
           | 
           | > Actually, a few different ones depending on the exact
           | angle you look at it from: solipsism, narcissism,...
           | 
           | That is indeed a problem with it. The Buddhist solution is to
           | make you promise not to do that.
           | 
           | https://en.wikipedia.org/wiki/Bodhicitta
           | 
           | And the (well, a) term for the entire problem is "non-dual
           | awareness".
        
         | Aurornis wrote:
         | > I especially liked the existential threat of Cade Metz.
         | 
         | I am perpetually fascinated by the way rationalists love to
         | dismiss critics by pointing out that they met some people in
         | person and they seemed nice.
         | 
         | It's such a bizarre meme.
         | 
         | Curtis Yarvin went to one of the "Vibecamp" rationalist
         | gatherings, was nice to some prominent Twitter rationalists,
         | and now they are ardent defenders of him on Twitter. Their
         | entire argument is "I met him and he was nice".
         | 
         | It's mind boggling that the rationalist part of their
         | philosophy goes out the window as soon as the lines are drawn
         | between in-group and out-group.
         | 
         | Bringing up Cade Metz is a perennial favorite signal because of
         | how effectively they turned it into a "you're either with us or
         | against us" battle, completely ignoring any valid arguments
         | Cade Metz may have brought to the table. Then you look at
         | how they treat Neoreactionaries and how we're supposed to look
         | past our disdain for them and focus on the possible good things
         | in their arguments, and you realize maybe this entire movement
         | isn't really about truth-seeking as much as they think it is.
        
       | danans wrote:
       | > A third reason I didn't identify with the Rationalists was,
       | frankly, that they gave off some (not all) of the vibes of a
       | cult, with Eliezer as guru.
       | 
       | Apart from a charismatic leader, a cult (in the colloquial
       | meaning) needs a business model and, very often, a sense of
       | separation from, and lack of accountability to, those outside
       | the cult, which provides a conveniently simpler environment in
       | which the cult's ideas operate. A sort of "complexity filter"
       | at the entry gate.
       | 
       | I'm not sure how the Rationalists compare to those criteria, but
       | I'd be curious to find out.
        
       | scoofy wrote:
       | It's weird that "being interested in philosophy" is like... a
       | movement. My background is in philosophy, but the _rationalist_
       | vs _nonrationalist_ debate seems like an undergraduate class
       | dispute.
       | 
       | My old roommate worked for Open Phil, and was obsessed with AI
       | Safety and really into Bitcoin. I never was. We still had
       | interesting arguments about it all the time. Most of the time we
       | just argued until we got to the axioms we disagreed on, and that
       | was that.
       | 
       | You don't have to agree with the Rationalist(tm) perspective to
       | apply philosophically rigorous thinking. You can be friends and
       | allies with them without agreeing with all their views. There are
       | strong arguments for why frequentism may be more applicable than
       | bayesianism in different domains. Or why transhumanism is a pipe
       | dream. They are still conversations that are worthwhile as
       | long as you're not so confident in your position that you
       | think you have nothing to learn.
        
       | lukas099 wrote:
       | This is vibe-based, but I think the Rationalists get more vitriol
       | than they deserve. Upon reflection, my hypothesis for this is
       | threefold:
       | 
       | 1. They are a _community_ --they have an in-group, and if you are
       | not one of them you are by definition in the out-group. People
       | tend not to like being in other peoples' out-groups.
       | 
       | 2. They have unusual opinions and are open about them. People
       | tend not to like people who express opinions different than their
       | own.
       | 
       | 3. They're nerds. Whatever has historically caused nerds to be
       | bullied/ostracized, they probably have.
        
         | teamonkey wrote:
         | > This is vibe-based
         | 
         | You mean _an empirical observation_
        
           | lowbloodsugar wrote:
           | Three examples of feelings-based conclusions were
           | presented. There is what is so, and there is how you feel
           | about it. By all means be empirical about what you felt,
           | and maybe look into that. "How this made me feel" is how
           | we got the USA we have today.
        
         | johnfn wrote:
         | HN judges rationality quite severely. I mean, look at this
         | thread about Mr. Beast[1], who it's safe to say is a
         | controversial figure, and notice how the top comments are
         | all pretty charitable. It's pretty funny to take the
         | conversation there and then compare the comments to this
         | article.
         | 
         | Scott Aaronson - in theory someone HN should be a huge fan of,
         | from all reports a super nice and extremely intelligent guy who
         | knows a staggering amount about quantum mechanics - says he
         | likes rationality, and gets less charity than Mr. Beast. Huh?
         | 
         | [1]: https://news.ycombinator.com/item?id=41549649
        
           | foldr wrote:
           | Most people are trying to be rational (to be sure, with
           | varying degrees of success), and people who aren't even
           | trying aren't really worth having abstract intellectual
           | discussions with. I'm reminded of CS Lewis's quip in a
           | different context that "you might just as well expect to be
           | congratulated because, whenever you do a sum, you try to get
           | it quite right."
        
             | throwaway314155 wrote:
             | Being rational and rationalist are not the same thing.
             | Funnily enough, this sort of false equivalence, which
             | relies on being "technically correct", is at the core of
             | what makes them... difficult.
        
         | Aurornis wrote:
         | > They are a community--they have an in-group, and if you are
         | not one of them you are by definition in the out-group.
         | 
         | The rationalist community is most definitely not exclusive.
         | You can join it simply by declaring yourself a rationalist
         | and posting blogs with "epistemic status" taglines.
         | 
         | The criticisms are not because it's a cool club that won't let
         | people in.
         | 
         | > They have unusual opinions and are open about them. People
         | tend not to like people who express opinions different than
         | their own.
         | 
         | Herein lies one of the problems with the rationalist community:
         | For all of their talk about heterodox ideas and entertaining
         | different viewpoints, they are remarkably lockstep in many of
         | their opinions.
         | 
         | From the outside, it's easy to see how one rationalist blogger
         | plants the seed of some topic and then it gets adopted by the
         | others as fact. A few years ago a rationalist blogger wrote a
         | long series postulating that trace lithium in water was causing
         | obesity. It even got an Astral Codex Ten monetary grant. For
         | years it got shared through the rationalist community as proof
         | of something, even though actual experts picked it apart from
         | the beginning and showed how the author was misinterpreting
         | studies, abusing statistics, and ignoring more prominent
         | factors.
         | 
         | The problem isn't differing opinions; the problem is that
         | they disregard actual expertise and make ham-fisted attempts
         | at "first principles" evaluations of a subject while
         | ignoring contradictory evidence, and they do this very
         | frequently.
        
           | gjm11 wrote:
           | Lockstep like this?
           | https://www.lesswrong.com/posts/7iAABhWpcGeP5e6SB/it-s-
           | proba... (a post on Less Wrong, karma score currently +442,
           | versus +102 and +230 for the two posts it cites as earlier
           | favourable LW coverage of the lithium claim -- the comments
           | on both of which, by the way, don't look to me any more
           | positive than "skeptical but interested")
           | 
           | The followup post from the same author
           | https://www.lesswrong.com/posts/NRrbJJWnaSorrqvtZ/on-not-
           | get... is currently at a score of +306, again higher than
           | either of those other pro-lithium-hypothesis posts.
           | 
           | Or maybe this https://substack.com/home/post/p-39247037 (I
           | admit I don't know for sure whether the author considers
           | himself a rationalist, but I found the link via a search for
           | whether Scott Alexander had written anything about the
           | lithium theory, which it looks like he hasn't, which turned
           | this up in the subreddit dedicated to his writing).
           | 
           | Speaking of which, I can't find any sign that they got an ACX
           | grant. I _can_ find https://www.astralcodexten.com/p/acx-
           | grants-the-first-half which is basically "hey, here are some
           | interesting projects we didn't give any money to, with a one-
           | paragraph pitch from each" and one of the things there is
           | "Slime Mold Time Mold" talking about lithium; incidentally,
           | the comments there are also pretty skeptical.
           | 
           | So I'm not really seeing this "gets adopted by the others as
           | fact" thing in this case; it looks to me as if some people
           | proposed this hypothesis, some other people said "eh, doesn't
           | look right to me", and rationalists' attitude was mostly
           | "interesting idea but probably wrong". What am I missing
           | here?
        
             | Aurornis wrote:
             | > Lockstep like this?
             | https://www.lesswrong.com/posts/7iAABhWpcGeP5e6SB/it-s-
             | proba... (a post on Less Wrong, karma score currently +442,
             | versus +102 and +230 for the two posts it cites as earlier
             | favourable LW coverage of the lithium claim -- the comments
             | on both of which, by the way, don't look to me any more
             | positive than "skeptical but interested")
             | 
             | That post came out a year later, in response to the
             | absurdity of the situation. The very introduction of that
             | post has multiple links showing how much the SMTM post was
             | spreading through the rationalist community with little
             | question.
             | 
             | One of the links is an Eliezer Yudkowsky blog post
             | praising the work, which now includes an edited-in
             | disclaimer at the top about how he was mistaken:
             | https://www.lesswrong.com/posts/kjmpq33kHg7YpeRYW/briefly-
             | ra...
             | 
             | Pretending that this theory didn't grip the rationalist
             | community all the way to top bloggers like Yudkowsky and
             | Scott Alexander is revisionist history.
        
           | lukas099 wrote:
           | > The rationalist community is most definitely not exclusive.
           | 
           | I agree, and didn't intend to express otherwise. It's not an
           | exclusive community, but it _is_ a community, and if you
           | aren't in it you are in the out-group.
           | 
           | > The problem isn't differing opinions, the problem is that
           | they disregard actual expertise and try ham-fisted attempts
           | at "first principals" evaluations of a subject while ignoring
           | contradictory evidence
           | 
           | I don't know if this is true or not, but if it is I don't
           | think it's why people scorn them. Maybe I don't give people
           | enough credit and you do, but I don't think most people care
           | how you arrived at an opinion; they merely care about whether
           | you're in their opinion-tribe or not.
        
             | const_cast wrote:
             | > Maybe I don't give people enough credit and you do, but I
             | don't think most people care how you arrived at an opinion;
             | they merely care about whether you're in their opinion-
             | tribe or not.
             | 
             | Yes, most people don't care how you arrived at an
             | opinion; rather, they care about the practical impact of
             | said opinion. IMO this is largely a _good_ thing.
             | 
             | You can logically push yourself to just about any opinion,
             | even absolutely horrific ones. Everyone has implicit biases
             | and everyone is going to start at a different starting
             | point. The problem with strings of logic about real-world
             | phenomena is that you HAVE to make assumptions. Like,
             | thousands of them. Because real-world phenomena are complex
             | and your model is simple. Which assumptions you choose to
             | make and in which directions are completely unknown, even
             | to you, the one making said assumptions.
             | 
             | Ultimately most people aren't going to sit here and try to
             | psychoanalyze why you made the assumptions you made and if
             | you were abused in childhood or deduce which country you
             | grew up in or whatever. It's too much work and it's
             | pointless - you yourself don't know, so how would we know?
             | 
             | So, instead, we just look at the end opinion. If it's
             | crazy, people are just going to call you crazy. Which I
             | think is fair.
        
       | bovermyer wrote:
       | I think I'm missing something important.
       | 
       | My understanding of "Rationalists" is that they're followers of
       | rationalism; that is, that truth can be understood only through
       | intellectual deduction, rather than sensory experience.
       | 
       | I'm wondering if this is a _different_ kind of "Rationalist." Can
       | someone explain?
        
         | kurtis_reed wrote:
         | It's a terrible name that collides with the older one you're
         | thinking of
        
         | FeteCommuniste wrote:
         | The easiest way to understand their usage of the term
         | "rational" might be to think of it as the negation of the term
         | "irrational" (where the latter refers mostly to cognitive
         | biases). Not as a contrast with "empirical."
        
       | stuaxo wrote:
       | Had to stop reading, everyone sounded so awful.
        
       | absurdo wrote:
       | What the fuck am I reading lmao.
        
       | cess11 wrote:
       | The narcissism in this movement is insufferable. I hope the
       | conditions for its existence will soon pass and give way to
       | something kinder and more learned.
        
       | lasersail wrote:
       | I was at Lighthaven that week. The weekend-long LessOnline
       | event Scott references opened what Lighthaven termed "Festival
       | Season", with a summer camp organised for the following five
       | weekdays, and a prediction market & forecasting conference
       | called Manifest the following weekend.
       | 
       | I didn't attend LessOnline, since I'm not active on LessWrong
       | and don't identify as a rationalist, but I did attend a GPU
       | programming course in the "summer camp" portion of the week,
       | and the Manifest conference (my primary interest).
       | 
       | My experience generally aligns with Scott's view: the
       | community is friendly and welcoming. But I had one strange
       | encounter. There
       | was some time allocated to meet with other attendees at Manifest
       | who resided in the same part of the world (not the Bay Area). I
       | ended up surrounded by a group of 5-6 folks who appeared to be
       | friends already, had been a part of the Rationalist movement for
       | a few years, and had attended LessOnline the previous weekend.
       | They spent most of the hour critiquing and comparing their
       | "quality of conversations" at LessOnline with the less
       | Rationalist-y, more prediction market & trading focused Manifest
       | event. Completely unaware of, or unwelcoming toward, my
       | presence as an
       | outsider, they essentially came to the conclusion that a lot of
       | the Manifest crowd were dummies and were - on average - "more
       | wrong" than themselves. It was all very strange, cult-y, pseudo-
       | intellectual, and lacking in self-awareness.
       | 
       | All that said, the experience at Summer Camp and Manifest was a
       | net positive, but there is some credence to sneers aimed at the
       | Rationalist community.
        
       | PoignardAzur wrote:
       | As someone who likes both the Rationalist community and the Rust
       | community, it's fascinating to see the parallels in how the
       | Hacker News crowd treats both.
       | 
       | The contempt, the general lack of curiosity and the violence of
       | the bold sweeping statements people will make here are mind-
       | boggling.
        
         | cosmojg wrote:
         | Both the Rationalist community and the Rust community are very
         | active in pursuing their goals, and unfortunately, it's far
         | easier to criticize others for doing things than it is to
         | actually do things yourself. Worse yet, if you are not yourself
         | actively doing things, you are far more likely to experience
         | fear when other people are actively doing things as there is
         | always some nonzero chance that they will do things counter to
         | your own goals, forcing you to actively do something lest you
         | fall behind. Alas, people often respond to fear with hatred,
         | especially given the benefit of physical isolation and
         | dissociation from humanity offered by the Internet, and I think
         | that's what you're seeing here on Hacker News.
        
         | Aurornis wrote:
         | > the general lack of curiosity
         | 
         | Honestly, I find the Hacker News comments in recent years to be
         | most enlightening because so many comments come from people who
         | spent years immersed in rationalist communities.
         | 
         | For years one of my friend groups was deep into LessWrong and
         | SSC. I've read countless blog posts and other content out of
         | those groups.
         | 
         | Yet every time I write about it, I'm dismissed as an uninformed
         | outsider. It's an interesting group of people who like to
         | criticize and dissect other groups, but they don't take kindly
         | to anyone questioning their own circles.
        
       | protocolture wrote:
       | "I have come out as a smart good thinking person, who knew"
       | 
       | >liberal zionist
       | 
       | hmmmm
        
       ___________________________________________________________________
       (page generated 2025-06-19 23:00 UTC)