https://scottaaronson.blog/?p=8908

Shtetl-Optimized: The Blog of Scott Aaronson
If you take nothing else from this blog: quantum computers won't solve hard problems instantly by just trying all solutions in parallel. Also, please read Zvi Mowshowitz's masterpiece on how to fix K-12 education!

---------------------------------------------------------------------

Guess I'm A Rationalist Now

A week ago I attended LessOnline, a rationalist blogging conference featuring many people I've known for years--Scott Alexander, Eliezer Yudkowsky, Zvi Mowshowitz, Sarah Constantin, Carl Feynman--as well as people I've known only online and was delighted to meet in person, like Joe Carlsmith and Jacob Falkovich and Daniel Reeves. The conference was at Lighthaven, a bewildering maze of passageways, meeting-rooms, sleeping quarters, gardens, and vines off Telegraph Avenue in Berkeley, which has recently emerged as the nerd Shangri-La, or Galt's Gulch, or Shire, or whatever.

I did two events at this year's LessOnline: a conversation with Nate Soares about the Orthogonality Thesis, and an ask-me-anything session about quantum computing and theoretical computer science (no new ground there for regular consumers of my content).

What I'll remember most from LessOnline is not the sessions, mine or others', but the unending conversation among hundreds of people all over the grounds, which took place in parallel with the sessions and before and after them, from morning till night (and through the night, apparently, though I've gotten too old for that). It felt like a single conversational archipelago, the largest in which I've ever taken part, and the conference's real point. (Attendees were exhorted, in the opening session, to skip as many sessions as possible in favor of intense small-group conversations--not only because it was better but also because the session rooms were too small.)

Within the conversational blob, just making my way from one building to another could take hours. My mean free path was approximately five feet before someone would notice my nametag and stop me with a question. Here was my favorite opener: "You're Scott Aaronson?! The quantum physicist who's always getting into arguments on the Internet, and who's essentially always right, but who sustains an unreasonable amount of psychic damage in the process?"

"Yes," I replied, not bothering to correct the "physicist" part.

One night, I walked up to Scott Alexander, who, sitting on the ground with his large bald head and a blanket he was using as a robe, resembled a monk. "Are you enjoying yourself?" he asked. I replied, "You know, after all these years of being coy about it, I think I'm finally ready to become a Rationalist. Is there, like, an initiation ritual or something?" Scott said, "Oh, you were already initiated a decade ago; you just didn't realize it at the time." Then he corrected himself: "Two decades ago."

The first thing I did, after coming out as a Rationalist, was to get into a heated argument with Other Scott A., Joe Carlsmith, and other fellow Rationalists about the ideas I set out twelve years ago in my Ghost in the Quantum Turing Machine essay. Briefly, my argument was that the irreversibility and ephemerality of biological life, which contrasts with the copyability, rewindability, etc.
of programs running on digital computers, and which can ultimately be traced back to microscopic details of the universe's initial state, subject to the No-Cloning Theorem of quantum mechanics, which then get chaotically amplified during brain activity ... might be a clue to a deeper layer of the world, one that we understand about as well as the ancient Greeks understood Newtonian physics, but which is the layer where mysteries like free will and consciousness will ultimately need to be addressed.

I got into this argument partly because it came up, but partly also because this seemed like the biggest conflict between my beliefs and the consensus of my fellow Rationalists. Maybe part of me wanted to demonstrate that my intellectual independence remained intact--sort of like a newspaper that gets bought out by a tycoon, and then immediately runs an investigation into the tycoon's corruption, as well as his diaper fetish, just to prove it can.

The funny thing, though, is that all my beliefs are the same as they were before. I'm still a computer scientist, an academic, a straight-ticket Democratic voter, a liberal Zionist, a Jew, etc. (all identities, incidentally, well-enough represented at LessOnline that I don't even think I was the unique attendee in the intersection of them all).

Given how much I resonate with what the Rationalists are trying to do, why did it take me so long to identify as one?

Firstly, while 15 years ago I shared the Rationalists' interests, sensibility, and outlook, and their stances on most issues, I also found them bizarrely, inexplicably obsessed with the question of whether AI would soon become superhumanly powerful and change the basic conditions of life on earth, and with how to make the AI transition go well. Why that, as opposed to all the other sci-fi scenarios one could worry about, not to mention all the nearer-term risks to humanity? Suffice it to say that empirical developments have since caused me to withdraw my objection. Sometimes weird people are weird merely because they see the future sooner than others. Indeed, it seems to me that the biggest thing the Rationalists got wrong about AI was to underestimate how soon the revolution would happen, and to overestimate how many new ideas would be needed for it (mostly, as we now know, it just took lots more compute and training data). Now that I, too, spend some of my time working on AI alignment, I was able to use LessOnline in part for research meetings with colleagues.

A second reason I didn't identify with the Rationalists was cultural: they were, and are, centrally a bunch of twentysomethings who "work" at an ever-changing list of Berkeley- and San-Francisco-based "orgs" of their own invention, and who live in group houses where they explore their exotic sexualities, gender identities, and fetishes, sometimes with the aid of psychedelics. I, by contrast, am a straight, monogamous, middle-aged tenured professor, married to another such professor and raising two kids who go to normal schools. Hanging out with the Rationalists always makes me feel older and younger at the same time.

So what changed? For one thing, with the march of time, a significant fraction of Rationalists now have marriages, children, or both--indeed, a highlight of LessOnline was the many adorable toddlers running around the Lighthaven campus. Rationalists are successfully reproducing! Some because of explicit pronatalist ideology, or because they were persuaded by Bryan Caplan's arguments in Selfish Reasons to Have More Kids.
But others simply because of the same impulses that led their ancestors to do the same for eons. And perhaps because, like the Mormons or Amish or Orthodox Jews, but unlike typical secular urbanites, the Rationalists believe in something. For all their fears around AI, they don't act doomy, but buzz with ideas about how to build a better world for the next generation.

At a LessOnline parenting session, hosted by Julia Wise, I was surrounded by parents who worry about the same things I do: how do we raise our kids to be independent and agentic yet socialized and reasonably well-behaved, technologically savvy yet not droolingly addicted to iPad games? What schooling options will let them accelerate in math, and save them from the crushing monotony that we experienced? How much of our own lives should we sacrifice on the altar of our kids' "enrichment," versus trusting Judith Rich Harris that such efforts quickly hit a point of diminishing returns?

A third reason I didn't identify with the Rationalists was, frankly, that they gave off some (not all) of the vibes of a cult, with Eliezer as guru. Eliezer writes in parables and koans. He teaches that the fate of life on earth hangs in the balance, that the select few who understand the stakes have the terrible burden of steering the future. Taking what Rationalists call the "outside view," how good is the track record for this sort of thing?

OK, but what did I actually see at Lighthaven? I saw something that seemed to resemble a cult only insofar as the Beatniks, the Bloomsbury Group, the early Royal Society, or any other community that believed in something did. When Eliezer himself--the bearded, cap-wearing Moses who led the nerds from bondage to their Promised Land in Berkeley--showed up, he was argued with like anyone else. Eliezer has in any case largely passed his staff to a new generation: Nate Soares and Zvi Mowshowitz have found new and, in various ways, better ways of talking about AI risk; Scott Alexander has for the last decade written the blog that's the community's intellectual center; figures from Kelsey Piper to Jacob Falkovich to Aella have taken Rationalism in new directions, from mainstream political engagement to the ... err ... statistical analysis of orgies. I'll say this, though, on the naysayers' side: it's really hard to make dancing to AI-generated pop songs about Bayes' theorem and Tarski's definition of truth not feel cringe, as I can now attest from experience.

The cult thing brings me to the deepest reason I hesitated for so long to identify as a Rationalist: namely, I was scared that if I did, people whose approval I craved (including my academic colleagues, but also just randos on the Internet) would sneer at me. For years, I searched for some explanation of this community's appeal so reasonable that it would silence the sneers. It took years of psychological struggle, and (frankly) solidifying my own place in the world, to follow the true path, which of course is not to give a shit what some haters think of my life choices.

Consider: five years ago, it felt obvious to me that the entire Rationalist community might be about to implode, under existential threat from Cade Metz's New York Times article, as well as RationalWiki and SneerClub and all the others laughing at the Rationalists and accusing them of every evil.
Yet last week at LessOnline, I saw a community that's never thrived more, with a beautiful real-world campus, excellent writers on every topic who felt like this was the place to be, and even a crop of kids. How many of the sneerers are living such fulfilled lives? To judge from their own angry, depressed self-disclosures, probably not many.

But are the sneerers right that, even if the Rationalists are enjoying their own lives, they're making other people's lives miserable? Are they closet far-right monarchists, like Curtis Yarvin? I liked how The New Yorker put it in its recent, long and (to my mind) devastating profile of Yarvin:

    The most generous engagement with Yarvin's ideas has come from bloggers associated with the rationalist movement, which prides itself on weighing evidence for even seemingly far-fetched claims. Their formidable patience, however, has also worn thin. "He never addressed me as an equal, only as a brainwashed person," Scott Aaronson, an eminent computer scientist, said of their conversations. "He seemed to think that if he just gave me one more reading assignment about happy slaves singing or one more monologue about F.D.R., I'd finally see the light."

The closest to right-wing politics that I witnessed at LessOnline was a session, with Kelsey Piper and current and former congressional staffers, about the prospects for moderate Democrats to articulate a pro-abundance agenda that would resonate with the public and finally defeat MAGA.

But surely the Rationalists are incels, bitter that they can't get laid? Again, the closest I saw was a session where Jacob Falkovich helped a standing-room-only crowd of mostly male nerds confront their fears around dating and understand women better, with Rationalist women eagerly volunteering to answer questions about their perspective. Gross, right? (Also, for those already in relationships, Eliezer's primary consort and former couples therapist Gretta Duleba did a session on relationship conflict.)

So, yes, when it comes to the Rationalists, I'm going to believe my own lying eyes over the charges of the sneerers. The sneerers can even say about me, in their favorite formulation, that I've "gone mask off," confirmed the horrible things they've always suspected. Yes, the mask is off--and beneath the mask is the same person I always was, who has an inordinate fondness for the Busy Beaver function and the complexity class BQP/qpoly, and who uses too many filler words and moves his hands too much, and who strongly supports the Enlightenment, and who once feared that his best shot at happiness in life would be to earn women's pity rather than their contempt. Incorrectly, as I'm glad to report. From my nebbishy nadir to the present, a central thing that's changed is that, from my family to my academic colleagues to the Rationalist community to my blog readers, I finally found some people who want what I have to sell.

---------------------------------------------------------------------

Unrelated Announcements:

My replies to comments on this post might be light, as I'll be accompanying my daughter on a school trip to the Galapagos Islands!

A few weeks ago, I was "ambushed" into leading a session on philosophy and theoretical computer science at UT Austin (i.e., asked to show up for the session, but under the impression I'd just be a participant rather than the main event). The session was then recorded and placed on YouTube--and surprisingly, given the circumstances, some people seemed to like it!
Friend-of-the-blog Alon Rosen has asked me to announce a call for nominations for a new theoretical computer science prize, in memory of my former professor (and fellow TCS blogger) Luca Trevisan, who was lost to the world too soon.

And one more: Mahdi Cheraghchi has asked me to announce the STOC'2025 online poster session, registration deadline June 12; see here for more. Incidentally, I'll be at STOC in Prague to give a plenary on quantum algorithms; I look forward to meeting any readers who are there!

This entry was posted on Monday, June 9th, 2025 at 8:02 pm and is filed under Adventures in Meatspace, Announcements, Embarrassing Myself, Nerd Interest, Nerd Self-Help, Obviously I'm Not Defending Aaronson.

52 Responses to "Guess I'm A Rationalist Now"

1. cwillu Says: Comment #1, June 10th, 2025 at 1:06 am
I'm reminded somewhat of the "Culture" from the books, which doesn't have borders so much as a core that gradually tapers off.

2. Edan Maor Says: Comment #2, June 10th, 2025 at 1:13 am
Bravo! I "identified" as a rationalist around 2012-2013, though I don't have any large online platform where I announced this to the world. And being from Israel, I didn't really meet any of the rationality community in SF. I had independently followed your blog since about 2009, IIRC, and looked up to you a lot (still do). And one of my favorite things in the world is that you gradually started talking about rationality positively. It reinforced that I wasn't crazy - someone outside of the rat-bubble *also* saw what I saw. (I feel similarly about your posts on Israel, btw.) I wish I could've been at LessOnline. I've since met a few of the people but would love to meet everyone. Hope this happens again next year.

3. wb Says: Comment #3, June 10th, 2025 at 7:39 am
I find the term Rationalist a bit confusing, because it is already used in philosophy with a different meaning (I think), as in rationalism vs. empiricism.

4. Quasiparticular Says: Comment #4, June 10th, 2025 at 9:18 am
I've always assumed the coupling of the Rationalists, both those active in its online community and those who have much in common with them (like scientific skeptics), to the likes of Curtis Yarvin was a reflexive political defense, the result of a reactive algorithm guiding those patrolling political extremes (the both of them) to score points with their in-group and summarily reject outsider input on spurious bases (usually "bad faith"). Ask one of these patrol people for a citation and you get a wojak meme mocking the request. Present a really good argument, and they will look for any reason to nullify it.

5. Adam H Says: Comment #5, June 10th, 2025 at 9:24 am
(Rationalist (if you're not, then you're irrational) is kinda a meaningless term unless you distinguish it from empiricists, but then we know that the hard line has philosophically collapsed via Quine.) But yeah, Eli has crowned himself the know-it-all guru of AI alignment. And I've read through enough of his material to see that he makes silly errors and he can't possibly master (cause no one can) all the material he tries to synthesize into his rationalist "program." He speaks with greatly broad strokes over many areas. He's right but doesn't know when he's wrong, or at best is naive and incomplete. He's just not the expert that he claims to be. But I'll take him more as a prophet. Pope Eli.
6. Scott Says: Comment #6, June 10th, 2025 at 9:51 am
wb #3: To make things even more confusing, in their love for Bayesian epistemology, the modern Rationalists arguably have more in common with the 17th-century Empiricists than they do with the 17th-century Rationalists. On the other hand, some would say that they share with the 17th-century Rationalists an extreme optimism in their own ability to figure things out from first principles. Anyway, I didn't invent these names! And I get really annoyed when others try to score ideological points by refusing to use the accepted names for things, so I resolved to avoid doing that myself.

7. Matan Says: Comment #7, June 10th, 2025 at 10:15 am
I'm very happy for you, Scott.

8. Simon Lermen Says: Comment #8, June 10th, 2025 at 10:24 am
Not sure you are aware of it, but the option to follow your blog through email with follow-it is not working particularly well. For one, when I get an email about a new post, it doesn't actually direct me to your blog but to a clone of your blog post on their website: https://follow.it/mail/newspaper/PhRKY8fTt0b1Jb7lBz7IiSms0NF0o9qz Also, the email and everything else is full of ads. Not sure about a better solution.

9. Philip Weiss Says: Comment #9, June 10th, 2025 at 11:25 am
Hey Scott- Re your UT Austin talk: I asked a question on the Philosophy Stack Exchange related to the question of mathematical existence which I feel has not gotten a proper answer. https://philosophy.stackexchange.com/questions/127861/do-philosophers-distinguish-between-computable-non-computable-existence Maybe you have a better response?

10. Daniel Says: Comment #10, June 10th, 2025 at 12:59 pm
Hi Scott! As a Bay Area nerd, you have convinced me to check out events at Lighthaven, but... as an unemployed broke person, I am unemployed and broke. Is there anything going on at the space that can be engaged with on such a budget? Like, can I just show up with my laptop one day and there will be interesting conversations for me to engage in without needing the infrastructure (and presumptive financial commitment) of formal conference attendance?

11. Scott Says: Comment #11, June 10th, 2025 at 1:03 pm
Philip Weiss #9: The philosophers willing to toy with ideas like "mathematical entities don't exist unless you give me an algorithm to construct those entities" go by names like "constructivist" or "intuitionist," or at the extreme end where you deny even the reality of sufficiently large positive integers, "ultrafinitists." In case it helps clarify, I personally reject all of those positions--they strike me as hopelessly convoluted ways of speaking. I just find it interesting to explore whether, among all the (100% objectively determined) entities of arithmetic, we "know" the ones associated with polynomial-time algorithms in a sense we don't "know" the ones associated only with exponential-time algorithms, or with no algorithms at all.

12. Scott Says: Comment #12, June 10th, 2025 at 1:29 pm
Daniel #10: Probably, yes! Get in touch with Ben Pace and/or Oliver Habryka, who help run things there.

13. Harald Says: Comment #13, June 10th, 2025 at 3:00 pm
I like Scott Alexander (the writer; I don't know the person) and would gladly go to a meeting like the one you describe if there were one anywhere close to where I live. I find the "Rationalist" name a bit silly, in part because it lays claim to a label that a few hundred million people outside that movement can fairly claim (why not "The Good People"?)
and in part because of reasons that you've put your finger on, or very close to, by pointing out the mismatch wrt empiricism vs. rationalism. I've been called "Rationalism-adjacent" by a well-meaning, non-Rationalist friend, and I wasn't remotely offended - I just had a genuine "who, me?" reaction. I got annoyed when another friend posted an easy satire making trivial, personal jibes at EA (and, I think, Rationalism, its evil twin brother or some such thing): apparently the point was that it was created by sweet, dumb, pimply nerds who got friend-zoned by cute girls and couldn't get into any place better than Berkeley. And yet there are plenty of obvious, valid criticisms one can make, not least its obvious amateurism. In this respect, Scott Alexander is a welcome change from Yudkowsky, and not coincidentally he is less of an amateur (some of his greatest successes (most notably, a closely argued, much-read case for masking made early on in the pandemic) have been in the general field he is actually a professional in). I see part of where this is all coming from - a deep and partly reasonable mistrust of what fields like sociology, anthropology and (perhaps for different reasons) philosophy may be up to, combined with being genuinely drawn to the questions they study (but, oddly enough, that doesn't lead to a serious engagement with their methods). Of course Scott Alexander seems to be drawn to everything, and his blog can be, at its best, like a virtual salon. At its best. It attracts odd crowds at times. Since you mentioned empiricists and rationalists, exercise: discuss Saint-Simonians. (That's even more to the point for EA and Progress Studies than for the circle around Astral Codex Ten, surely?)

14. Simon Lermen Says: Comment #14, June 10th, 2025 at 3:02 pm
Daniel #10: From next Monday, Lighthaven will host the MATS program (https://www.matsprogram.org/) until late August. This will be focused on AI safety. I believe MATS scholars can invite guests, so that is one option. I'll be there as a scholar.

15. Fred H Says: Comment #15, June 10th, 2025 at 4:06 pm
I'm sorry, but bringing children (toddlers!) to an event with rampant illegal drug abuse (to say nothing of the promiscuous sex and other general debauchery) sounds deeply irresponsible--both because of the obvious safety implications, and also because you're teaching these young ones that illegal drug abuse is okay.

16. Doug S. Says: Comment #16, June 10th, 2025 at 4:23 pm
Once again, I am left disappointed that I live on the wrong coast of the United States and probably can't afford to live in an area with what are probably the most expensive housing costs in the entire United States. :/

17. Scott Says: Comment #17, June 10th, 2025 at 4:23 pm
Fred H #15: If it matters, I can testify that I saw zero signs of any kids exposed to anything inappropriate--they were just running around having fun, eating, getting their diapers changed, etc. If any drug-fueled orgies happened (I wouldn't know), they were out of general view.

18. Scott Says: Comment #18, June 10th, 2025 at 4:26 pm
Doug S. #16: I don't live there either! (Not since grad school 20+ years ago.) I just come for events--pretty hard to avoid, since Berkeley is now the worldwide capital of quantum complexity theory and Rationalism and AI safety research. If I lived there today, I'd worry that I'd spend so much time meeting people that I'd never get anything done.
19. Philip Weiss Says: Comment #19, June 10th, 2025 at 4:49 pm
> we "know" the ones associated with polynomial-time algorithms in a sense we don't "know" the ones associated only with exponential-time algorithms, or with no algorithms at all.
It reminds me of light cones. As time passes, where light can reach expands as well. There is in some sense a "computation" cone. If there is an upper bound on computation per unit time in the universe, then there is also a set of "reachable" computations that expands out as time expands. The algorithms that exist help define the boundary of that cone.

20. Daniel Says: Comment #20, June 10th, 2025 at 7:10 pm
Scott #12: Thank you, will do! Simon Lermen #14: On review, this looks very exciting, and reminds me of some of my fonder grad school-era memories. I doubt I have the background needed to contribute anything to the technical side of the discourse, it having been maybe 5 or 6 years since I even semi-seriously engaged with CS theory topics, but certainly my interest level in attending a large number of tech talks from AI alignment scholars is quite high. I would definitely be interested in attending as a guest, if you would really suffer no opportunity cost from inviting me, and it wouldn't reflect badly on you to have added to the program someone who is really no more than a hobbyist and whom you met in the comment section of Shtetl-Optimized.

21. Sniffnoy Says: Comment #21, June 10th, 2025 at 9:46 pm
Man, I am really thinking I should have gone to LessOnline this year. It really seems like I could have met a bunch of cool people! But the possibility didn't occur to me until, like, a few days before, and no way was I planning a California trip at that point. Oh well, I guess I'll go next year perhaps? I'll admit a big part of my reluctance to go has been not just the distance but also my basic sense that the east coast is better and they ought to hold it over here. (Not necessarily in NYC, just, y'know, on the east coast. Somewhere in between DC and Boston.) But, I guess that's not a thing that's going to happen, because they have Lighthaven out west, and nothing equivalent out east. I do have to wonder how things might have developed differently had Eliezer and co. lived in Boston instead of the Bay. On that note, I feel like the stereotype of the rat community wrt drugs has been that people in it are either really into drugs or really opposed to drugs, with it being in between that's unusual. How true that really is, I'm not sure, but... (But also, y'know, there's communities outside the Bay that don't have all those features, and are likely better for it. Though some of said features might be good to import...)

22. Sniffnoy Says: Comment #22, June 10th, 2025 at 9:57 pm
wb #3, Scott #6: Yes, the terminological overloading is confusing; "rationalism" in this sense is unrelated to the philosophical sense. But I feel like I should point out that it's not the first time this has happened -- consider the Rationalist Association started in the late 19th century (and the Rationalist Society of Australia started shortly afterward). So you could say we have the philosophical sense, the 19th/20th century sense, and the 21st century sense. Except, I would say that the latter two have quite a bit of overlap! I don't think Eliezer Yudkowsky would agree much with philosophical rationalism but I expect he would have quite a bit in common with the people of the Rationalist Association and RSA.
23. Sniffnoy Says: Comment #23, June 10th, 2025 at 10:21 pm
Harald #13:
> I find the "Rationalist" name a bit silly, in part because it lays claim to a label that a few hundred million people outside that movement can fairly claim (why not "The Good People"?)
I feel like I should point out here, I'm not sure this name was internally generated. Eliezer Yudkowsky in the sequences always spoke of "aspiring rationalists" -- it's an ideal to be aspired to, not something you can actually be. But, of course, terms naturally get clipped, or maybe its origin was something else... I don't know, but yeah, unfortunately the label "rationalists" was probably inevitable. But, while I don't like it, I don't think it's quite as bad as you suggest; I don't think the quoted criticism is a good one. And I should point out that people make the same criticism of "Effective Altruism"! I don't think it's a good criticism there, either. The fact is that actually there are quite a lot of people who don't care that much about good thinking, who believe in ghosts and horoscopes and whatnot. A thing I like to point out is that Yudkowsky's writings on rationality have quite a bit of continuity with earlier writing on rationality by people like Feynman and Sagan. It certainly has its own distinctive aspects, and certainly the rationalist community is mostly socially quite separate from the older (and aging) skeptic community, but like, ultimately they're coming from the same place. I don't think the rationalist community focuses too much on debunking UFOs and such because, well, that's just not considered that relevant at the moment (the 90s were a less eventful time, seems like! When conspiracy theories were funny instead of horrifying), but there's still quite a lot of bad epistemology out there, you know? (What about the 19th century Rationalist Association, made up of freethinkers and secularists? Should they not have used that name? I dunno, seems pretty appropriate to me! Here in New York we've actually on one or two occasions read some of their stuff at rat meetups...) Like, I think there actually are a number of people who would oppose the idea of "rationality". This might be because they associate it with Straw Vulcans, and don't understand that rationality is not the same thing as acting like a cartoon robot. They might be hippie-ish people who promote relying primarily on one's feelings. (The former two categories admittedly probably overlap heavily, so that's not really two categories.) They might be religious traditionalists who say we're supposed to obey God, not think for ourselves. So saying "yes, rationality is good" may sound like a vacuous statement when you're among a better crowd -- but out among the general public, it really isn't!

24. SR Says: Comment #24, June 11th, 2025 at 12:34 am
I have mixed feelings about rationalism. My first exposure was in college, where I went to EA club meetings occasionally, as well as hung out with rationalist-adjacent friends who would frequently discuss puzzles relating to utilitarianism, game theory, etc. I felt that the EA folks were well-intentioned but that the entire movement was a bit culty, that the pitch they made to convince people to join was manipulative, and that they had not fully thought through the philosophical foundations of their program. The SBF saga has personally confirmed those intuitions to some extent. The puzzles I would discuss with my friends were interesting for a while, but I gradually became rather jaded with them.
The "rational" answer to many of these puzzles is to behave psychopathically (e.g. pushing a fat man off a bridge in front of a trolley). This is supposed to expose how our intuitions are fallible, and make us become more reflective. However, in real life, we never face such situations, and our intuitions have largely been shaped in the "normal" way for good reasons by evolution and our life experiences. I feel like critically examining these sorts of intuitions instead makes people more likely to behave badly in their everyday lives, as they will jump to idealized "rational" solutions while ignoring important side constraints. Since then, I have mainly read rationalists online, and have learned to ignore the provocateurs. I do greatly admire the rationalist commitment to picking up new intellectual tools and incorporating them into their worldview. On the other hand, I do think they overestimate the uniqueness and depth of their 'canon', which can largely be picked up by taking a few college level classes on economics, statistics, philosophy. I also think they tend to focus on the wrong fields. Utilitarianism, decision theory, computability, etc. all seem to me like emergent concepts; they do not track the structure of reality at as deep a level as do physics and evolution. I don't know why, for instance, there are rationalists worried about the universe being generated by the Solomonoff prior rather than trying to learn quantum field theory. Or now, why they don't seem to be trying harder to apply our understanding of biological evolution to forecasting future progress on neural nets. All that said, at least they are sincerely trying to understand the world around them, which makes them better in my book than pretty much everyone else. I don't engage with much rationalist content online anymore. I've realized that I personally gain much more by directly reading textbooks, histories, etc., and I think rationalists who don't already would feel the same way if they were to try it. But I will credit the rationalists for making me realize that this is indeed a thing I could do in the first place! Obviously it's not some arcane knowledge that one can read books, but it was genuinely a revelation to me (even coming from a technical background) that (1) one can pick up basic training in fields outside of one's expertise, instead of relying on expert opinion, (2) one can reach correct and nonobvious conclusions in numerous domains by systematically reasoning in a logical and relatively unbiased manner, and (3) that one can incorporate such an enriching mindset into one's daily life even past one's college years. For this I will always be grateful. 25. abdessamed gtumsila Says: Comment #25 June 11th, 2025 at 3:14 am Thank you for the article. I felt like I was walking with you through Lighthaven! Your style is realistic and honest, and it's nice to see how your perspective has changed over time. 26. Michael Gogins Says: Comment #26 June 11th, 2025 at 7:39 am Sniffnoy #23: Clearly there are indeed many religious traditionalists who do their best to inculcate conformity. However... Luke 12:57 has Jesus asking: "Why don't you judge for yourselves what is right?" Religion is not a single "bloc." (I am not a Christian.) 27. Your friend Says: Comment #27 June 11th, 2025 at 7:43 am Scott, I need to tell you something. Can you please shoot me an email? I'm not even asking for a phone call this time. I just want to tell you a story by email. Please respond to this. 
If you don't respond, I will keep asking every single day. Please just respond. Thanks.

28. EliRational Says: Comment #28, June 11th, 2025 at 10:28 am
Welcome to the club! Like a good rationalist, you've inspired a Manifold prediction with your musings: https://manifold.markets/HawkeyePierce/a-scott-aaronson-style-device-predi

29. Simon Lermen Says: Comment #29, June 11th, 2025 at 11:47 am
Daniel #20: I can figure out what I can do; how should I contact you? I am on Twitter with my name @simonlermenai

30. Henning Says: Comment #30, June 11th, 2025 at 12:48 pm
Delighted that you found your "tribe". The event sounds like great fun (as does the Galapagos school trip).

31. Teddy Says: Comment #31, June 11th, 2025 at 3:30 pm
Scott, I would like to offer a rebuttal to your comment #19 on your post dated April 30, 2025. Here goes: You're wrong -- the √(N/K) scaling is not actually what Grover's algorithm achieves in the multiple-solution case. That formula is a heuristic extrapolation, not a rigorous bound. The original Grover algorithm is designed to amplify the amplitude of a single marked state, and its quadratic speedup arises from rotating the state vector in a well-defined 2D subspace. When K > 1, the geometry becomes higher-dimensional, and the reflection operators used in Grover's iteration no longer cleanly rotate toward the marked subspace. In fact, the standard Grover iteration becomes non-optimal without modification. Without precise amplitude balancing -- which depends on knowing K in advance -- you overshoot or undershoot the optimal success amplitude. That means the claimed √(N/K) bound only holds if you already know K, or can estimate it -- but that's often as hard as solving the original problem. So no, Grover's algorithm doesn't "just get faster" with more solutions; in its pure form, it actually breaks unless carefully adjusted.

32. Julian Says: Comment #32, June 11th, 2025 at 4:05 pm
This event sounds... truly awesome. I'm reminded of the utopian future society in The Culture series by Iain Banks. Will there be another similar event next year? My dad's side of the family is from the Bay Area, so I might be there!

33. Scott Says: Comment #33, June 11th, 2025 at 5:51 pm
Teddy #31: Yawn.

34. Daniel Says: Comment #34, June 11th, 2025 at 6:45 pm
Simon Lermen #29: I'm not on Twitter, but you can reach me by email at danielkbork@gmail.com Scott, thank you very much for facilitating this interaction!

35. Mohamad Mashal Says: Comment #35, June 12th, 2025 at 8:48 am
hey sir, i'm a high school student who's been learning about quantum physics on my own, mostly out of obsession. i had a weird thought i can't shake, and i wanted to ask someone smarter if there's a clear reason it doesn't work. basically: what if you had a machine that kept measuring an entangled particle over and over, but only kept the result if it was what you wanted (like "spin up"), and deleted the rest instantly? and only when it sees that right result, it does something -- like triggers a classical signal or lines up a timestamp. if both sides had the same setup and rules, could that sort of act like a yes/no message? not by forcing the outcome, just by waiting and reacting only to the right one. i get that entanglement isn't supposed to send info, but what if the trick is not in the outcome, but it's in how long you wait before you log it? has anyone tried something like this? or is there something obvious i'm missing? appreciate you reading either way, have a great day
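To make the difficulty in #35 concrete, here is a minimal sketch (toy Python, assuming only the singlet's perfect anticorrelation when both sides measure in the same fixed basis -- not a full quantum simulation, and the function name is invented for illustration). Alice can discard whichever records she likes, but the faraway particle gets measured on every run regardless, so its outcome frequencies are identical whether or not she post-selects:

```python
import random

def bob_spin_up_fraction(n_trials: int, alice_postselects: bool) -> float:
    """Toy singlet model: outcomes perfectly anticorrelated in a shared basis.

    Alice may 'delete' every run where she doesn't see spin-up, but Bob has
    no access to her log, so his statistics run over all n_trials trials.
    """
    bob_up = 0
    for _ in range(n_trials):
        alice = random.choice(("up", "down"))    # Alice's local outcome: 50/50
        bob = "down" if alice == "up" else "up"  # perfect anticorrelation
        if alice_postselects and alice != "up":
            pass  # Alice discards her record; Bob's measurement already happened
        bob_up += (bob == "up")
    return bob_up / n_trials

random.seed(0)
print(bob_spin_up_fraction(100_000, alice_postselects=False))  # ~0.5
print(bob_spin_up_fraction(100_000, alice_postselects=True))   # ~0.5, indistinguishable
```

The timing variant in the question fails the same way: when a run gets logged is information that exists only in Alice's lab, and no local choice of hers changes the distribution the other side observes.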
36. Scott Says: Comment #36, June 12th, 2025 at 10:19 am
Mohamad #35: Sorry, that's still not going to work to send a faster-than-light signal, because there's no operation of "deleting the result if it's not what you want" that would cause the faraway party to notice that you did it.

37. OMG Says: Comment #37, June 13th, 2025 at 9:36 am
Next time just claim mid-life crisis after the party.

38. William Gasarch Says: Comment #38, June 13th, 2025 at 10:34 am
Your post raises the question of `What is a cult?' One defining feature, which I do not think Rationalists have, is as follows: no allowing for dissent. The site LessWrong, which is rationalist, is happy with dissent and opposing viewpoints and intelligent discussion. IF LessWrong is the poster child for rationalists (is it?) then Rationalism is not a cult.

39. Eric Cordian Says: Comment #39, June 13th, 2025 at 12:22 pm
Hi Scott, Congratulations on being promoted to Rationalist. If you've got a moment, I'd love to hear your rough estimate for what year you think humanity will reach the following milestones.
1. We achieve Artificial General Intelligence.
2. We achieve Artificial Superintelligence.
3. Humanoid robots are as ubiquitous as cars.
4. More new humans are 3D printed from lab-grown cell lines than are grown from embryos.
5. The organic brains of many humans have silicon coprocessors implanted next to them.
6. P vs NP is resolved.
7. We answer the question "What is consciousness?"

40. Armin Says: Comment #40, June 13th, 2025 at 4:29 pm
I just happened to come across this article by David Brooks, which explicitly blames the current political divide in the US on a rationalist approach to education. I am not entirely convinced by the argument (because I think the problem is more complex than that), but the idea of trying to define an elite by more than just IQ does make sense to me. Should you decide to read it, I'd appreciate knowing your thoughts. https://www.theatlantic.com/magazine/archive/2024/12/meritocracy-college-admissions-social-economic-segregation/680392/

41. A Says: Comment #41, June 13th, 2025 at 8:01 pm
Harald #13:
> Since you mentioned empiricists and rationalists, exercise: discuss Saint-Simonians. (That's even more to the point for EA and Progress Studies than for the circle around Astral Codex Ten, surely?)
I think in Rationalist circles, you would find a variety of views on political economy not too different from a demographically similar random sample. And on the progressive side, something like Georgism is going to be substantially more relevant than Saint-Simonism, but that is for cultural reasons, not for subcultural reasons (i.e. it would also be present in the demographically similar random sample).

42. OMG Says: Comment #42, June 14th, 2025 at 2:37 am
Armin #40: I read your link and it appears to me a mishmash of grievances. As an example, this:
"The meritocracy was supposed to sort people by innate ability. But what it really does is sort people according to how rich their parents are. As the meritocracy has matured, affluent parents have invested massively in their children so they can win in the college-admissions arms race."
So part of his article is an attack on meritocracies and the other half makes the point there isn't a meritocracy in the US at this time. He includes the usual references to social studies with weak results, and completely ignores the impact of the quality of the college itself as a factor in determining outcomes.
In fact, one of the studies he references found that children in Tennessee who had good teachers in kindergarten tended to do well later in life. So the quality of the education itself was material to the outcome later in life. Imagine that. I don't believe the rise of Trump and other similar politicians has anything at all to do with the quality of students at elite universities, but is completely to do with what they are taught as part of an elite education in the US and West. He includes a point often used in similar articles, that an IQ study of 1500 children had no Nobel Prize winners, which is just the likelihood of a sample of that size. Intelligence isn't a sufficient condition to win a Nobel Prize but it is a necessary condition. I know my opinion was unsolicited so pardon these comments.

43. TK Says: Comment #43, June 14th, 2025 at 7:20 pm
Suppose a bookish turkey lives on a turkey farm. Certain conspiracy-inclined turkeys on the farm believe without evidence that their beloved farmer intends to kill them. However, our bookish turkey hero says wisely that, "Each day, the farmer feeds us multiple square meals. There's no evidence that he intends to kill anyone. He protects us from predators. When a turkey is ill, the farmer nurses it to health. All of this is further proof that the farmer loves turkeys." As the days, weeks and months roll on, each day the turkey can input this new data into his Bayesian algorithm that shows the increasing degree of certainty that farmers love and care for turkeys. Eventually, however, Thanksgiving comes and the farmer kills all the turkeys. This is a "black swan" event. By the standard of turkey science, this is a paranormal event--indeed a supernatural one, transcending the normal principles of how nature has been established to function. But if the late bookish turkey were David Hume, his ghost, hovering over the Thanksgiving meal, would still confidently be able to say: "Look, this might look bad, but I have more reason to trust objective science and the statistics we've built over years. Sure, I have anecdotal and subjective 'evidence' that perhaps farmers do kill turkeys, but it is always more likely that I've misperceived this and I am actually still alive and being fed well on the farm."

44. Edan Maor Says: Comment #44, June 15th, 2025 at 1:57 am
One other note: I got around to seeing the video of your talk, and I think it was fantastic. I especially recommend it for people like me who've consumed a few of your talks/podcasts over the years, because this talk actually included relatively "new" things. A lot of the questions from the audience were high-level, which helped a lot. I found the discussion at the beginning of what it means to "know" a number especially interesting, though I don't feel like it was resolved! Though I have to say, I'm surprised you didn't go for the trivial counterexample to "we have to know all its digits", which to me is "ok, so do we know sqrt(2)?".

45. hwold Says: Comment #45, June 15th, 2025 at 4:33 am
> A second reason I didn't identify with the Rationalists was cultural: they were, and are, centrally a bunch of twentysomethings who "work" at an ever-changing list of Berkeley- and San-Francisco-based "orgs" of their own invention, and who live in group houses where they explore their exotic sexualities, gender identities, and fetishes, sometimes with the aid of psychedelics.
I don't understand why so many people put this in the central definition of "rationalist community".
It does not match the people all over the world who will never set foot on American soil, let alone the Bay Area, and who hang around online at LessWrong & Astral Codex Ten. gwern, for example, is very much a part of the rationalist community and does not match that description, so how did it come up in the first place? Why are people taking it seriously?

46. Ben Says: Comment #46, June 15th, 2025 at 2:31 pm
Scott, it sounds like in some sense you have found your people, and after years of reading your blog and agreeing with almost all of what you write (except the quantum complexity parts, which I mostly don't understand), but seeing you disturbed by the uncivil reactions (and worse), I am quite happy for you!

47. Raoul Ohio Says: Comment #47, June 15th, 2025 at 3:32 pm
OMG #42:
1. HaHa: the problem with meritocracy is that it is not meritocracy. OK.
2. One aspect in the rise of Trumpism (etc.) is changes in how news propagates. The centuries-long system of lots of news outlets used to allow most people to find some balanced information. This has largely collapsed due to the internet sucking up all the ad revenue, so most independent news is gone. Meanwhile, huge money is poured into a few fake news organizations to promote crazy stuff and ultimately change the game to transfer more wealth to the wealthy. Add to this all the one-person bloggers/influencers/etc. who earn money by being outrageous, ...,

48. asdf Says: Comment #48, June 16th, 2025 at 1:20 am
Off topic: certified quantum randomness in the lab again, sweet. https://www.nature.com/articles/s41586-025-09054-3

49. OMG Says: Comment #49, June 16th, 2025 at 3:11 am
Raoul #47:
1) Yes. He makes a point about creativity being undervalued. I have met a couple of people with, say, 99.99%+ memories and agree that having a superb memory doesn't necessarily imply creativity. The way the education system works, recall is sufficient to get one through superbly until maybe a PhD program in a technical area or resolving a novel problem in private industry. Doing something novel in a technical field requires something more than just recall. Creativity in general just means making things up that have never happened, and likely never will happen. It may have entertainment value but little else. So like you, I think media and politicians and the elite currently have way more creativity in the general sense than is optimum for society. Our society is drowning in general creativity and yet he claims "creativity" is under-appreciated. Creativity in the technical sense is some thought process other than recall that results in knowledge that provides new objective truth about the physical or mathematical world. I see no way to identify this capability before the fact. You identify students with high technical intellectual capacity, and with good education some small portion of them will ultimately be able to increase knowledge about the physical world. I read some of this journalist's articles in the National Review (he was an intimate of William F. Buckley of the old blue-blood elite) and it seemed to me the same as this article. He attacks many things and then after the fact can point to something in triumph. I read one article generally as supporting Bush and Desert Storm with attacks on Bush mixed in. Then here he derides Desert Storm, so I guess he can point to something in his older articles in triumph. All of this is just my personal observation; I'm unfamiliar with any literature on this subject.
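TK's turkey parable in #43 is easy to make quantitative. A minimal sketch (with an assumed, purely illustrative Beta(1,1) prior on the daily probability of being fed, updated on one feeding per day): after n good days the posterior mean is (n+1)/(n+2), which climbs steadily toward 1, while the hypothesis "the farmer is fattening me for Thanksgiving" predicts exactly the same feedings and so can never be ruled out by them:

```python
def posterior_mean_fed(good_days: int) -> float:
    """Beta-Binomial update: Beta(1, 1) prior, one 'fed' observation per day.

    Returns the turkey's posterior probability of being fed tomorrow. Since
    'the farmer is fattening me up' predicts the same daily feedings, this
    number grows without ever distinguishing the two hypotheses.
    """
    alpha, beta = 1 + good_days, 1  # observed successes + prior, failures
    return alpha / (alpha + beta)

for day in (1, 10, 100, 364):
    print(day, round(posterior_mean_fed(day), 4))  # day 364 -> 0.9973, then Thanksgiving
```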
50. OMG Says: Comment #50, June 16th, 2025 at 9:26 am
TK #43: I really enjoyed your post. Black swan from the viewpoint of the turkeys.

51. Fulmenius Says: Comment #51, June 16th, 2025 at 10:12 am
It's relieving to read news from another world, where such rationalist meetings are possible and safe (at least for now). I salute your decision to finally stop giving a fuck about what the pathetic sneering trolls mutter through their teeth, and wish you and your daughter a good journey! Hope to attend such a meeting one day (in a better version of Moscow, where I am now, or at Lighthaven, where a better version of myself hopefully might get to sometime).

52. Armin Says: Comment #52, June 17th, 2025 at 3:11 pm
OMG #42:
> So part of his article is an attack on meritocracies and the other half makes the point there isn't a meritocracy in the US at this time.
I understood the article to say that an attempt to implement meritocracy failed (in the sense of selecting for the wrong sort of people to rise to the elite, who then by their actions helped increase national division), both because it failed to capture merit in some broader and socially more beneficial sense and because it can be manipulated by throwing money at it. If we call this "pseudomerit" and let the first instance of "meritocracy" in your sentence refer to it while the second refers to merit in the broader sense, then we get what I understood to be a main argument of the article.
> I don't believe the rise of Trump and other similar politicians has anything at all to do with the quality of students at elite universities but is completely to do with what they are taught as part of an elite education in the US and West.
I agree that in full generality his argument is not so convincing, but if we restrict ourselves to political, legal and financial elites, there may be something to it. I say this because I think the shrinking of the middle class, growing wealth inequality, the rise of the working poor etc. over the last 50 years seems a direct consequence of the collective actions of these people, and I think the rise of Trump is at least in part a reaction to these trends. A counterpoint to this is that, as you said, their societally harmful actions may not reflect who they are but what they are taught, but IMO an even stronger one may be that this is a systemic problem, in the sense that our society is structured so that even the best-intentioned persons are incentivized/compelled to act as they did the last 50 years. My counterpoint to these counterpoints is that they discount the agency of these people. If anyone has agency over their actions, it is those in the highest positions of society.
> I know my opinion was unsolicited so pardon these comments.
On the contrary, you made points well worth thinking about, so I appreciate it.

Leave a Reply

You can use rich HTML in comments! You can also use basic TeX, by enclosing it within $$ $$ for displayed equations or \( \) for inline equations.

Comment Policies: After two decades of mostly-open comments, in July 2024 Shtetl-Optimized transitioned to the following policy: All comments are treated, by default, as personal missives to me, Scott Aaronson---with no expectation either that they'll appear on the blog or that I'll reply to them. At my leisure and discretion, and in consultation with the Shtetl-Optimized Committee of Guardians, I'll put on the blog a curated selection of comments that I judge to be particularly interesting or to move the topic forward, and I'll do my best to answer those.
But it will be more like Letters to the Editor. Anyone who feels unjustly censored is welcome to the rest of the Internet. To the many who've asked me for this over the years, you're welcome!