https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

by jessicata, 16th Oct 2021 (32 min read)

I appreciate Zoe Curzi's revelations of her experience with Leverage. I know how hard it is to speak up when no or few others do, and when people are trying to keep things under wraps. I haven't posted much publicly about my experiences working as a researcher at MIRI (2015-2017) or around CFAR events, to a large degree because I've been afraid. Now that Zoe has posted about her experience, I find it easier to do so, especially after the post was generally well-received by LessWrong.

I felt moved to write this, not just because of Zoe's post, but also because of Aella's commentary:

"I've found established rationalist communities to have excellent norms that prevent stuff like what happened at Leverage. The times where it gets weird is typically when you mix in a strong leader + splintered, isolated subgroup + new norms. (this is not the first time)"

This seemed to me to be definitely false upon reading it. Most of what was considered bad about the events at Leverage Research also happened around MIRI/CFAR, around the same time period (2017-2019). I don't want to concentrate on the question of which is "worse"; it is hard to even start thinking about that without discussing facts on the ground and general social models that would apply to both cases. I also caution against blame in general in situations like these, where many people (including me!) contributed to the problem and have kept quiet for various reasons. With good reason, it is standard for truth and reconciliation events to focus on restorative rather than retributive justice, and to include the possibility of forgiveness for past crimes.

As a roadmap for the rest of the post: I'll start by describing some background, then describe some trauma symptoms and mental health issues I and others have experienced, and then describe the actual situations that these mental events were influenced by and "about" to a significant extent.

Background: choosing a career

After I finished my CS/AI Master's degree at Stanford, I faced a choice of what to do next. I had a job offer at Google for machine learning research and a job offer at MIRI for AI alignment research. I had also previously considered pursuing a PhD at Stanford or Berkeley; I'd already done undergrad research at CoCoLab, so this could have easily been a natural transition. I'd decided against a PhD on the basis that research in industry was a better opportunity to work on important problems that impact the world; since then I've gotten more information from insiders that academia is a "trash fire" (not my quote!), so I don't regret this decision.

I was faced with a decision between Google and MIRI.
I knew that at MIRI I'd be taking a pay cut. On the other hand, I'd be working on AI alignment, an important problem for the future of the world, probably significantly more important than whatever I'd be working on at Google. And I'd get an opportunity to work with smart, ambitious people who were structuring their communication protocols and life decisions around the content of the LessWrong Sequences. These Sequences contained many ideas that I had developed or discovered independently, such as the functionalist theory of mind, the idea that Solomonoff Induction is a formalization of inductive epistemology, and the idea that one-boxing in Newcomb's problem is more rational than two-boxing. The scene attracted thoughtful people who cared about getting the right answer on abstract problems like this, making for very interesting conversations. Research at MIRI was an extension of such interesting conversations into rigorous mathematical formalism, making it very fun (at least for a time). Some of the best research I've done was at MIRI (reflective oracles, logical induction, others). I met many of my current friends through LessWrong, MIRI, and the broader LessWrong Berkeley community.

When I began at MIRI (in 2015), there were ambient concerns that it was a "cult": this was a set of people with a non-mainstream ideology that claimed that the future of the world depended on a small set of people that included many of them. These concerns didn't seem especially important to me at the time. So what if the ideology is non-mainstream, as long as it's reasonable? And if the most reasonable set of ideas implies high impact from a rare form of research, so be it; that's been the case at times in history.

(Most of the rest of this post will be negative-valenced, like Zoe's post; I wanted to put some things I liked about MIRI and the Berkeley community up front. I will be noting parts of Zoe's post and comparing them to my own experience, which I hope helps to illuminate common patterns; it really helps to have an existing different account to prompt my memory of what happened.)

Trauma symptoms and other mental health problems

Back to Zoe's post. I want to disagree with a frame that says that the main thing that's bad was that Leverage (or MIRI/CFAR) was a "cult". This makes it seem like what happened at Leverage is much worse than what could happen at a normal company. But, having read Moral Mazes and talked to people with normal corporate experience (especially in management), I find that "normal" corporations are often quite harmful to the psychological health of their employees, e.g. causing them to have complex PTSD symptoms, to see the world in zero-sum terms more often, and to have more preferences for things to be incoherent. Normal startups are commonly called "cults", with good reason. Overall, there are both benefits and harms of high-demand ideological communities ("cults") compared to more normal occupations and social groups, and the specifics matter more than the general class of something being "normal" or a "cult", although the general class affects the structure of the specifics.

Zoe begins by listing a number of trauma symptoms she experienced. I have, personally, experienced most of those on the list of cult after-effects in 2017, even before I had a psychotic break.
The psychotic break was in October 2017, and involved psychedelic use (as part of trying to "fix" multiple deep mental problems at once, which was, empirically, overly ambitious); although people around me to some degree tried to help me, this "treatment" mostly made the problem worse, so I was placed in 1-2 weeks of intensive psychiatric hospitalization, followed by 2 weeks in a halfway house. This was followed by severe depression lasting months, and less severe depression from then on, which I still haven't fully recovered from. I had PTSD symptoms after the event and am still recovering.

During this time, I was intensely scrupulous; I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation. I was catatonic for multiple days, afraid that by moving I would cause harm to those around me. This is in line with scrupulosity-related post-cult symptoms.

Talking about this is to some degree difficult because it's normal to think of this as "really bad". Although it was exceptionally emotionally painful and confusing, the experience taught me a lot, very rapidly; I gained and partially stabilized a new perspective on society and my relation to it, and to my own mind. I have much more ability to relate to normal people now, who are, for the most part, also traumatized. (Yes, I realize how strange it is that I became more able to relate to normal people by occupying an extremely weird mental state where I thought I was destroying the world and was ashamed and suicidal regarding this; such is the state of normal Americans, apparently, in a time when suicidal music is extremely popular among youth.) Like Zoe, I have experienced enormous post-traumatic growth. To quote the song "I Am Woman": "Yes, I'm wise, but it's wisdom born of pain. I guess I've paid the price, but look how much I've gained."

While most people around MIRI and CFAR didn't have psychotic breaks, there were at least 3 other cases of psychiatric institutionalization among people in the social circle immediate to MIRI/CFAR; at least one other than me had worked at MIRI for a significant time, and at least one had done work with MIRI on a shorter-term basis. There was, in addition, a case of someone becoming very paranoid, attacking a mental health worker, and hijacking her car, leading to jail time; this person was not an employee of either organization, but had attended multiple CFAR events, including a relatively exclusive AI-focused one. I heard that the paranoid person in question was concerned about a demon inside him, implanted by another person, trying to escape. (I knew the other person in question, and their own account was consistent with attempting to implant mental subprocesses in others, although I don't believe they intended anything like this particular effect.)

My own actions while psychotic later that year were, though physically nonviolent, highly morally confused; I felt that I was acting very badly and "steering in the wrong direction", e.g. in controlling the minds of people around me or subtly threatening them, and was seeing signs that I was harming people around me, although none of this was legible enough to seem objectively likely after the fact. I was also extremely paranoid about the social environment, being unable to sleep normally due to fear.
There are even cases of suicide in the Berkeley rationality community associated with scrupulosity and mental self-improvement (specifically, Maia Pasek/SquirrelInHell and Jay Winterford/Fluttershy, both of whom were long-time LessWrong posters; Jay wrote an essay about suicidality, evil, domination, and Roko's basilisk months before the suicide itself). Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz. (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don't think this judgment was in error, though many of the following actions were; the splinter group seems to have selected for high scrupulosity and not attenuated its mental impact.)

The cases discussed are not always of MIRI/CFAR employees, so they're hard to attribute to the organizations themselves, even if they were clearly in the same or a nearby social circle. Leverage was an especially legible organization, with a relatively clear interior/exterior distinction, while CFAR was less legible, having a set of events that different people were invited to, and many conversations including people not part of the organization. Hence, it is easier to attribute organizational responsibility at Leverage than around MIRI/CFAR. (This diffusion of responsibility, of course, doesn't help when there are actual crises, mental health or otherwise.)

Obviously, for every case of poor mental health that "blows up" and is noted, there are many cases that aren't. Many people around MIRI/CFAR and Leverage, like Zoe, have trauma symptoms (including "cult after-effect symptoms") that aren't known about publicly until the person speaks up.

Why do so few speak publicly, and after so long?

Zoe discusses why she hadn't gone public until now. She first cites fear of response:

"Leverage was very good at convincing me that I was wrong, my feelings didn't matter, and that the world was something other than what I thought it was. After leaving, it took me years to reclaim that self-trust."

Clearly, not all cases of people trying to convince each other that they're wrong are abusive; there's an extra dimension of institutional gaslighting: people telling you something you have no reason to expect they actually believe, people being defensive and blocking information, giving implausible counter-arguments, trying to make you doubt your account and agree with their bottom line. Jennifer Freyd writes about "betrayal blindness", a common problem where people hide from themselves evidence that their institutions have betrayed them. I experienced this around MIRI/CFAR.

Some background on AI timelines: at the Asilomar Beneficial AI conference in early 2017 (after AlphaGo was demonstrated in late 2016), I remember another attendee commenting on a "short timelines bug" going around. Apparently a prominent researcher was going around convincing people that human-level AGI was coming in 5-15 years. This trend in belief included MIRI/CFAR leadership; one person commented that he noticed his timelines trending only towards getting shorter, and decided to update all at once. I've written about AI timelines in relation to political motivations before (long after I actually left MIRI).

Perhaps more important to my subsequent decisions, the shortening of AI timelines triggered an acceleration of social dynamics. MIRI became very secretive about research.
Many researchers were working on secret projects, and I learned almost nothing about these. I and other researchers were told not to even ask each other about what others of us were working on, on the basis that if someone were working on a secret project, they might have to reveal that fact. Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects.

I had disagreements with the party line, such as on when human-level AGI was likely to be developed and about security policies around AI, and there was quite a lot of effort to convince me of their position: that AGI was likely coming soon and that I was endangering the world by talking openly about AI in the abstract (not even about specific new AI algorithms). Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness [EDIT: Eliezer himself and Sequences-type thinking, of course, would aggressively disagree with the epistemic methodology advocated by this person]. I experienced a high degree of scrupulosity about writing anything even somewhat critical of the community and institutions (e.g. this post). I saw evidence of bad faith around me, but it was hard to reject the frame for many months; I continued to worry about whether I was destroying everything by going down certain mental paths and not giving the party line the benefit of the doubt, despite its increasing absurdity.

Like Zoe, I was definitely worried about fear of response. I had paranoid fantasies about a MIRI executive assassinating me. The decision theory research I had done came to life, as I thought about the game theory of submitting to the threat of a gun, in relation to how different decision theories respond to extortion. This imagination, though extreme (and definitely reflective of a cognitive error), was to some degree reinforced by the social environment. I mentioned the possibility of whistleblowing on MIRI to someone I knew, who responded that I should consider talking with Chelsea Manning, a whistleblower who is under high threat. There was quite a lot of paranoia at the time, both among the "establishment" (who feared being excluded or blamed) and the "dissidents" (who feared retaliation by institutional actors). (I would, if asked to take bets, have bet strongly against actual assassination, but I did fear other responses.)

More recently (in 2019), there were multiple masked protesters at a CFAR event (handing out pamphlets critical of MIRI and CFAR) who had a SWAT team called on them (by camp administrators, not CFAR people, although a CFAR executive had called the police previously about this group), who were arrested, and are now facing the possibility of long jail time. While this group of people (Ziz and some friends/associates) chose an unnecessarily risky way to protest, hearing about this made me worry about violently authoritarian responses to whistleblowing, especially when I was under the impression that it was a CFAR-adjacent person who had called the cops to say the protesters had a gun (which they didn't have), which is the way I heard the story the first time.

Zoe further talks about how the experience was incredibly confusing, and how people usually only talk about the past events secretively. This matches my experience.
Like Zoe, I care about the people I interacted with during the time of the events (who are, for the most part, colleagues who I learned from), and I don't intend to cause harm to them through writing about these events.

Zoe discusses an unofficial NDA people signed as they left, agreeing not to talk badly of the organization. While I wasn't pressured to sign an NDA, there were significant security policies discussed at the time (including the one about researchers not asking each other about research). I was discouraged from writing a blog post estimating when AI would be developed, on the basis that a real conversation about this topic among rationalists would cause AI to come sooner, which would be more dangerous (the blog post in question would have been similar to the AI forecasting work I did later, here and here; judge for yourself how dangerous this is). This made it hard to talk about the silencing dynamic; if you don't have the freedom to speak about the institution and the limits of freedom of speech, then you don't have freedom of speech.

(Is it a surprise that, after over a year in an environment where I was encouraged to think seriously about the possibility that simple actions such as writing blog posts about AI forecasting could destroy the world, I would develop the belief that I could destroy everything through subtle mental movements that manipulate people?)

Years before, MIRI had a non-disclosure agreement that members were pressured to sign, as part of a legal dispute with Louie Helm. I was certainly socially discouraged by executive people from revealing things that would harm the "brand" of MIRI and CFAR. There was some discussion at the time of the possibility of corruption in EA/rationality institutions (e.g. Ben Hoffman's posts criticizing effective altruism, GiveWell, and the Open Philanthropy Project); a lot of this didn't end up on the Internet due to PR concerns.

Someone I was collaborating with at the time (Michael Vassar) was commenting on social epistemology and the strengths and weaknesses of various people's epistemology and strategy, including people who were leaders at MIRI/CFAR. Subsequently, Anna Salamon said that Michael was causing someone else at MIRI to "downvote Eliezer in his head", and that this was bad because it meant that the "community" would not agree about who the leaders were, and would therefore have akrasia issues due to the lack of agreement on a single leader in their head telling them what to do. (Anna says, years later, that she was concerned about bias in selectively causing downvotes rather than upvotes; however, at the time, based on what was said, I had the impression that the primary concern was about coordination around common leadership rather than bias specifically.) This seemed culty to me and some friends; it's especially evocative in relation to Julian Jaynes' writing about Bronze Age cults, which details a psychological model in which idols/gods give people voices in their head telling them what to do.

(As I describe these events in retrospect they seem rather ridiculous, but at the time I was seriously confused about whether I was especially crazy or in-the-wrong, and the leadership was behaving sensibly. If I were the type of person to trust my own judgment in the face of organizational mind control, I probably wouldn't have been hired in the first place; everything I knew about how to be hired would point towards having little mental resistance to organizational narratives.)
Strange psycho-social-metaphysical hypotheses in a group setting

Zoe gives a list of points showing how "out of control" the situation at Leverage got. This is consistent with what I've heard from other ex-Leverage people. The weirdest part of the events recounted is the concern about possibly-demonic mental subprocesses being implanted by other people.

As a brief model of something similar to this (not necessarily the same model as the Leverage people were using): people often pick up behaviors ("know-how") and mental models from other people, through acculturation and imitation. Some of this influence could be (a) largely unconscious on the part of the receiver, (b) partially intentional on the part of the person having mental effects on others (where these intentions may include behaviorist conditioning, similar to hypnosis, causing behaviors to be triggered under certain circumstances), and (c) overall harmful to the receiver's conscious goals. According to IFS-like psychological models, it's common for a single brain to contain multiple sub-processes with different intentions. While the mental subprocess implantation hypothesis is somewhat strange, it's hard to rule out based on physics or psychology.

As weird as the situation got, with people being afraid of demonic subprocesses being implanted by other people, there were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR. These strange experiences are, as far as I can tell, part of a more general social phenomenon around that time period; I recall a tweet commenting that the election of Donald Trump convinced everyone that magic was real.

Unless there were psychiatric institutionalizations or jail time resulting from the Leverage psychosis, I infer that Leverage overall handled its metaphysical weirdness better than the MIRI/CFAR-adjacent community did. While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible. (I noted at the time that there might be a sense in which different people have "auras", in a way that is no less inherently rigorous than the way in which different people have "charisma", and I feared this type of comment would cause people to say I was crazy.) As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.

The case Zoe recounts of someone "having a psychotic break" sounds tame relative to what I'm familiar with. Someone can mentally explore strange metaphysics, e.g. a different relation to time or God, in a supportive social environment where people can offer them informational and material assistance, and help reality-check their ideas.
Alternatively, like me, they can explore these metaphysics while:

* losing days of sleep
* becoming increasingly paranoid and anxious
* feeling delegitimized and gaslit by those around them, unable to communicate their actual thoughts with those around them
* fearing involuntary psychiatric institutionalization
* experiencing involuntary psychiatric institutionalization
* having almost no real mind-to-mind communication during "treatment"
* learning primarily to comply and to play along with the incoherent, shifting social scene (there were mandatory improv classes)
* being afraid of others in the institution, including being afraid of sexual assault, which is common in psychiatric hospitals
* believing the social context to be a "cover up" of things including criminal activity, and learning to comply with it, on the basis that one would be unlikely to exit the institution within a reasonable time without doing so

Being able to discuss somewhat wacky experiential hypotheses, like the possibility of people spreading mental subprocesses to each other, in a group setting, and have the concern actually taken seriously as something that could seem true from some perspective (and which is hard to definitively rule out), seems much more conducive to people's mental well-being than refusing to have that discussion, leaving people to struggle with (what they think is) mental subprocess implantation on their own. Leverage definitely had large problems with these discussions, and perhaps tried to reach more intersubjective agreement about them than was plausible (leading to over-reification, as Zoe points out), but these problems seem less severe than those resulting from refusing to have the discussions at all, such as psychiatric hospitalization and jail time.

"Psychosis" doesn't have to be a bad thing, even if it usually is in our society; it can be an exploration of perceptions and possibilities not before imagined, in a supportive environment that helps the subject navigate reality in a new way; some of R.D. Laing's work is relevant here, describing psychotic mental states as a result of ontological insecurity following from an internal division of the self at a previous time. Despite the witch hunts and so on, the Leverage environment seems more supportive than what I had access to. The people at Leverage I talk to, who have had some of these unusual experiences, often have a highly exploratory attitude to the subtle mental realm, having gained access to a new cognitive domain through the experience, even if it was traumatizing.

World-saving plans and rarity narratives

Zoe cites the fact that Leverage had a "world-saving plan" (which included taking over the world) and considered Geoff Anders and Leverage to be extremely special, e.g. Geoff being possibly the best philosopher ever:

"Within a few months of joining, a supervisor I trusted who had recruited me confided in me privately, 'I think there's good reason to believe Geoff is the best philosopher who's ever lived, better than Kant. I think his existence on earth right now is an historical event.'"

Like Leverage, MIRI had a "world-saving plan". This is no secret; it's discussed in an Arbital article written by Eliezer Yudkowsky.
Nate Soares frequently talked about how it was necessary to have a "plan" to make the entire future OK, to avert AI risk; this plan would need to "backchain" from a state of no AI risk and might, for example, say that we must create a human emulation using nanotechnology that is designed by a "genie" AI, which does a narrow task rather than taking responsibility for the entire future; this would allow the entire world to be taken over by a small group including the emulated human.

I remember taking on more and more mental "responsibility" over time, noting the ways in which people other than me weren't sufficient to solve the AI alignment problem, and that I had special skills, so it was uniquely my job to solve the problem. This ultimately broke down, and I found Ben Hoffman's post on responsibility (which discusses the issue of control-seeking) to resonate.

The decision theory of backchaining and taking over the world is somewhat beyond the scope of this post. There are circumstances where backchaining is appropriate, and "taking over the world" might be necessary, e.g. if there are existing actors already trying to take over the world and none of them would implement a satisfactory regime. However, there are obvious problems with multiple actors each attempting to control everything, which are discussed in Ben Hoffman's post.

This connects with what Zoe calls "rarity narratives". There were definitely rarity narratives around MIRI/CFAR. Our task was to create an integrated, formal theory of values, decisions, epistemology, self-improvement, etc. ("Friendliness theory"), which would help us develop Friendly AI faster than the rest of the world combined was developing AGI (which was, according to leaders, probably coming in less than 20 years). It was said that a large part of our advantage in doing this research so fast was that we were "actually trying" and others weren't. It was stated by multiple people that we wouldn't really have had a chance to save the world without Eliezer Yudkowsky (obviously implying that Eliezer was an extremely historically significant philosopher).

Though I don't remember people saying explicitly that Eliezer Yudkowsky was a better philosopher than Kant, I would guess many would have said so. No one there, as far as I know, considered Kant worth learning from enough to actually read the Critique of Pure Reason in the course of their research; I only did so years later, and I'm relatively philosophically inclined. I would guess that MIRI people would consider a different set of philosophers relevant, e.g. would include Turing and Einstein as relevant "philosophers", and I don't have reason to believe they would consider Eliezer more relevant than these, though I'm not certain either way. (I think Eliezer is a world-historically-significant philosopher, though not as significant as Kant or Turing or Einstein.)

I don't think it's helpful to oppose "rarity narratives" in general. People need to try to do hard things sometimes; actually accomplishing those things would make the people in question special, and that isn't a good argument against trying the thing at all. Intellectual groups with high information integrity, e.g. the early quantum mechanics people, can have a large effect on history. I currently think the intellectual work I do is pretty rare and important, so I have a "rarity narrative" about myself, even though I don't usually promote it.
Of course, a project claiming specialness while displaying low information integrity is, effectively, asking for more control and resources than it can beneficially use. Rarity narratives can have the effects of making a group of people more insular, concentrating relevance around the group rather than learning from other sources (in the past or the present), making local social dynamics more centered on a small number of special people, and increasing pressure on people to try to do (or pretend to try to do) things beyond their actual abilities; Zoe and I both experienced these effects.

(As a hint for evaluating rarity narratives yourself: compare the Great Thinker's public output to what you've learned from other public sources; follow citations and see where the Great Thinker might be getting their ideas from; read canonical great philosophy and literature; get a quantitative sense of how much insight is coming from which places throughout spacetime.)

The object-level specifics of each case of world-saving plan matter, of course; I think most readers of this post will be more familiar with MIRI's world-saving plan, especially since Zoe's post provides few object-level details about the content of Leverage's plan.

Debugging

Rarity ties into debugging: if what makes us different is that we're Actually Trying and the other AI research organizations aren't, then we're making a special psychological claim about ourselves, that we can detect the difference between actually and not-actually trying, and cause our minds to actually try more of the time.

Zoe asks whether debugging was "required"; she notes:

"The explicit strategy for world-saving depended upon a team of highly moldable young people self-transforming into Elon Musks."

I, in fact, asked a CFAR instructor in 2016-17 whether the idea was to psychologically improve yourself until you became Elon Musk, and he said "yes". This part of the plan was the same [EDIT: Anna clarifies that, while some people becoming like Elon Musk was some people's plan, there was usually acceptance of people not changing themselves; this might to some degree apply to Leverage as well].

Self-improvement was a major focus around MIRI and CFAR, and at other EA orgs. It often used standard CFAR techniques, which were taught at workshops. It was considered important to psychologically self-improve to the point of being able to solve extremely hard, future-lightcone-determining problems. I don't think these are bad techniques, for the most part. I think I learned a lot by observing and experimenting on my own mental processes. (Zoe isn't saying Leverage's techniques are bad either, just that you could get most of them from elsewhere.)

Zoe notes a hierarchical structure where people debugged people they had power over:

"Trainers were often doing vulnerable, deep psychological work with people with whom they also lived, made funding decisions about, or relied on for friendship."

Sometimes people debugged each other symmetrically, but mostly there was a hierarchical, asymmetric structure of vulnerability; underlings debugged those lower than them on the totem pole, never their superiors, and superiors did debugging with other superiors. This was also the case around MIRI and CFAR. A lot of debugging was done by Anna Salamon, head of CFAR at the time; Ben Hoffman noted that "every conversation with Anna turns into an Anna-debugging-you conversation", which resonated with me and others.
There was certainly a power dynamic of "who can debug whom"; to be a more advanced psychologist is to be offering therapy to others, being able to point out when they're being "defensive", while not accepting the same from them. This power dynamic is also present in normal therapy, although the profession has norms, such as only getting therapy from strangers, which change the situation.

How beneficial or harmful this was depends on the details. I heard that "political" discussions at CFAR (e.g. determining how to resolve conflicts between people at the organization, which could result in people leaving the organization) were mixed with "debugging" conversations, in a way that would make it hard for people to focus primarily on the debugged person's mental progress without imposing pre-determined conclusions. Unfortunately, when there are few people with high psychological aptitude around, it's hard to avoid "debugging" conversations having political power dynamics, although it's likely that the problem could have been mitigated.

It was really common for people in the social space, including me, to have a theory about how other people are broken and how to fix them, by getting them to understand a deep principle that you do and they don't. I still think most people are broken and don't understand deep principles that I or some others do, so I don't think this was wrong, although I would now approach these conversations differently.

A lot of the language from Zoe's post, e.g. "help them become a master", resonates. There was an atmosphere of psycho-spiritual development, often involving Kegan stages. There is a significant degree of overlap between people who worked with or at CFAR and people at the Monastic Academy [EDIT: see Duncan's comment estimating that the actual amount of interaction between CFAR and MAPLE was pretty low, even though there was some overlap in people].

Although I wasn't directly financially encouraged to debug people, I infer that CFAR employees were, since instructing people was part of their job description.

Other issues

MIRI did have less time pressure imposed by the organization itself than Leverage did, despite the deadline implied by the AGI timeline; I had no issues with absurdly over-booked calendars. I vaguely recall that CFAR employees were overworked, especially around workshop times, though I'm pretty uncertain of the details.

Many people's social lives, including mine, were spent mostly "in the community"; much of this time was spent on "debugging" and other psychological work. Some of my most important friendships at the time, including one with a housemate, were formed largely around a shared interest in psychological self-improvement. There was, therefore, relatively little work-life separation (which has upsides as well as downsides).

Zoe recounts an experience of having unclear, shifting standards applied to her, with the fear of ostracism. Though the details of my experience are quite different, I was definitely afraid of being considered "crazy" and marginalized for having philosophy ideas that were too weird, even though weird philosophy would be necessary to solve the AI alignment problem. I noticed more people saying I and others were crazy as we were exploring sociological hypotheses that implied large problems with the social landscape we were in (e.g. people thought Ben Hoffman was crazy because of his criticisms of effective altruism).
I recall talking to a former CFAR employee who was scapegoated and ousted after failing to appeal to the winning internal coalition; he was obviously quite paranoid and distrustful, and another friend and I agreed that he showed PTSD symptoms [EDIT: I infer scapegoating based on the public reason given being suspicious/insufficient; someone at CFAR points out that this person was paranoid and distrustful while first working at CFAR as well].

Like Zoe, I experienced myself and others being distanced from old family and friends, who didn't understand how high-impact the work we were doing was. Since leaving the scene, I am more able to talk with normal people (including random strangers), although it's still hard to talk about why I expect the work I do to be high-impact.

An ex-Leverage person I know comments that "one of the things I give Geoff the most credit for is actually ending the group when he realized he had gotten in over his head. That still left people hurt and shocked, but did actually stop a lot of the compounding harm." (While Geoff is still working on a project called "Leverage", the initial "Leverage 1.0" ended with most of the people leaving.) This is to some degree happening with MIRI and CFAR, with a change in the narrative about the organizations and their plans, although the details are currently less legible than with Leverage.

Conclusion

Perhaps one lesson to take from Zoe's account of Leverage is that spending relatively more time discussing sociology (including anthropology and history), and less time discussing psychology, is more likely to realize benefits while avoiding problems. Sociology is less inherently subjective and meta than psychology, having intersubjectively measurable properties such as events in human lifetimes and social network graph structures. My own thinking has certainly gone in this direction since my time at MIRI, to great benefit.

I hope this account I have written helps others to understand the sociology of the rationality community around 2017, and that this understanding helps people to understand other parts of the society they live in.

There are, obviously from what I have written, many correspondences, showing a common pattern for high-ambition ideological groups in the San Francisco Bay Area. I know there are serious problems at other EA organizations, which produce largely fake research (and probably took in people who wanted to do real research, who became convinced by their experience to do fake research instead), although I don't know the specifics as well. EAs generally think that the vast majority of charities are doing low-value and/or fake work. I also know that San Francisco startup culture produces cult-like structures (and associated mental health symptoms) with regularity. It seems more productive to, rather than singling out specific parties, think about the social and ecological forces that create and select for the social structures we actually see, which include relatively more and less cult-like structures. (Of course, to the extent that harm is ongoing due to actions taken by people and organizations, it's important to be able to talk about that.)

It's possible that after reading this, you think this wasn't that bad. Though I can only speak for myself here, I'm not sad that I went to work at MIRI instead of Google or academia after college.
I don't have reason to believe that either of these environments would have been better for my overall intellectual well-being or my career, despite the mental and social problems that resulted from the path I chose. Scott Aaronson, for example, blogs about "blank-faced" non-self-explaining authoritarian bureaucrats being a constant problem in academia. Venkatesh Rao writes about the corporate world, and the picture presented is one of a simulation constantly maintained through improv.

I did grow from the experience in the end. But I did so in large part by being very painfully aware of the ways in which it was bad.

I hope that those who think this is "not that bad" (perhaps due to knowing object-level specifics around MIRI/CFAR justifying these decisions) consider how they would find out whether the situation with Leverage was "not that bad", in comparison, given the similarity of the phenomena observed in both cases; such an investigation may involve learning object-level specifics about what happened at Leverage.

I hope that people don't scapegoat; in an environment where certain actions are knowingly being taken by multiple parties, singling out certain parties has negative effects on people's willingness to speak without actually producing any justice. Aside from whether things were "bad" or "not that bad" overall, understanding the specifics of what happened, including harms to specific people, is important for actually accomplishing the ambitious goals these projects are aiming at; there is no reason to expect extreme accomplishments to result without very high levels of epistemic honesty.

Comments (373, sorted by top scoring)

[-] Scott Alexander (1d, 277)

I want to add some context I think is important to this.

Jessica was (I don't know if she still is) part of a group centered around a person named Vassar, informally dubbed "the Vassarites". Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to "jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself). Jailbreaking involves a lot of tough conversations, breaking down of self, and (at least sometimes) lots of psychedelic drugs.

Vassar ran MIRI a very long time ago, but either quit or got fired, and has since been saying that MIRI/CFAR is also infinitely corrupt and conformist and traumatizing (I don't think he thinks they're worse than everyone else, but I think he thinks they had a chance to be better, they wasted it, and so it's especially galling that they're just as bad). Since then, he's tried to "jailbreak" a lot of people associated with MIRI and CFAR - again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs. The combinat... (read more)

[-] devi (6h, 60)

"Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC"

Digging out this old account to point out that I have not in fact detransitioned, but find it understandable why those kinds of rumours would circulate given my behaviour during/around my experience of psychosis.
I'll try to explain some context for the record. In other parts of the linked blogpost Ziz writes about how some people around the rationalist community were acting on or spreading variations of the meme "trans women are [psychologically] men". I experienced this while dating AM (same as mentioned above). She repeatedly brought up this point in various interactions. Since we were both trans women this was hurting us both, so I look back with more pity than concern about malice. At some point during this time I started treating this as a hidden truth that I was proud of myself for being able to see, which, in retrospect, I feel disgusted and complicit to have accepted. This was my state of mind when I discussed these issues with Zack, reinforcing each other's views. I believe (less certain) I also broached the topic with Michael and/or Anna at some point, which probably went... (read more)

[-] Zack_M_Davis (20h, 55)

I talked and corresponded with Michael a lot during 2017-2020, and it seems likely that one of the psychotic breaks people are referring to is mine from February 2017? (Which Michael had nothing to do with causing, by the way.) I don't think you're being fair.

"'jailbreak' yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself)"

I'm confident this is only a Ziz-ism: I don't recall Michael using the term, and I just searched my emails for jailbreak, and there are no hits from him.

"again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs [...] describing how it was a Vassar-related phenomenon"

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea giv... (read more)

[-] Scott Alexander (12h, 60)

I don't want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn't harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I'm suspicious), but when you had some more minor issues in 2019 I was more in the loop, and I ended up emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them of making your situation worse and asking them to maybe lay off you until you were maybe feeling slightly better, and obviously they just responded with their "it's correct to be freaking out about learning your entire society is corrupt and gaslighting" shtick.

[-] Scott Alexander (12h, 47)

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea giv... (read more)
Michael is a charismatic guy who has strong views and argues forcefully for them. That's not the same thing as having mysterious mind powers to "make people paranoid" or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I'm sure he'd be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument. I more or les... (read more)

[-] Viliam (6h, 3)

...and then pushing them. So, this seems deliberate. He is not even hiding it, if you listen carefully.

[-] ChristianKl (15h, -1)

Rumor has it that https://www.sfgate.com/news/bayarea/article/Man-Gets-5-Years-For-Attacking-Woman-Outside-13796663.php is due to Vassar-recommended drugs. In the OP, that case does get blamed on CFAR's environment without any mention of that part. When talking about whether or not CFAR is responsible for that story, factors like that seem to me to matter quite a bit. I'd love it if anyone who's nearer can confirm/deny the rumor and fill in missing pieces.

[-] Andrew Rettek (10h, 43)

As I mentioned elsewhere, I was heavily involved in that incident for a couple months after it happened and I looked for causes that could help with the defense. AFAICT no drugs were taken in the days leading up to the mental health episode or arrest (or people who took drugs with him lied about it).

[-] AnnaSalamon (9h, 37)

I, too, asked people questions after that incident and failed to locate any evidence of drugs.

[-] jessicata (9h, 47)

I want to point out that the level of mental influence being attributed to Michael in this comment and others (e.g. that he's "causing psychotic breaks" and "jailbreaking people" through conversation, "that listening too much to Vassar [causes psychosis], predictably") isn't obviously less than the level of mental influence Leverage attributed to people in terms of e.g. mental objects. Some people in the thread are self-congratulating on the rationalists not being as crazy and abusive as Leverage was in worrying that people were spreading harmful psychological objects to each other, and therefore isolating these people from their friends. Yet many in this comment thread are, literally, calling for isolating Michael Vassar from his friends on the basis of his mental influence on others.

[-] Scott Alexander (3h, 33)

Yes, I agree with you that all of this is very awkward.

I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it. But we have to admit at least small violations of it even to get the concept of "cult". Not just the sort of weak cults we're discussing here, but even the really strong cults like Heaven's Gate or Jonestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven's Gate is bad for them, and leave. When we use the word "cult", we're implicitly agreeing that this doesn't always work, and we're bringing in creepier and less comprehensible ideas like "charisma" and "brainwashing" and "cognitive dissonance".
(and the same thing with the concept of "emotionally abusive relationship")

I don't want to call the Vassarites a cult because I'm sure someone will confront me with a Cult Checklist that they don't meet, but I think that it's not too crazy to argue that some of these same creepy ideas like charisma and so on were at work there. And everyone knows cults can get weird and end in mental illness. I agree it's weird that you can get tha... (read more)

[-] nshepperd (21h, 45)

"I don't think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people."

Based on the things I am reading about what has happened, blame, ostracism, and cancelling seem like the bare minimum of what we should do.

Vassar has had, I think, about 6 transfems gravitate to him, join his projects, go on his quests, that I've heard of. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn't detransition. Olivia became an agent of mental breakdown, compulsively breaking others via drug trips because they went through gates they shouldn't've.

This is really, really serious. If this happened to someone closer to me I'd be out for blood, and probably legal prosecution. Let's not minimize how fucked up this is.

[-] jessicata (21h, 24)

Olivia, Devi and I all talked to people other than Michael Vassar, such as Anna Salamon. We gravitated towards the Berkeley community, which was started around Eliezer's writing. None of us are calling for blame, ostracism, or cancelling of Michael. Michael helped all of us in ways no one else did. None of us have a motive to pursue a legal case against him. Ziz's sentence you quoted doesn't implicate Michael in any crimes. The sentence is also misleading given Devi didn't detransition afaik.

[-] Viliam (13h, 59)

Jessicata, I will be blunt here. This article you wrote was [EDIT: expletive deleted] misleading. Perhaps you didn't do it on purpose; perhaps this is what you actually believe. But from my perspective, you are an unreliable narrator.

Your story, original version:

* I worked for MIRI/CFAR
* I had a psychotic breakdown, and I believed I was super evil
* the same thing also happened to a few other people
* conclusion: MIRI/CFAR is responsible for all this

Your story, updated version:

* I worked for MIRI/CFAR
* then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil
* I actually used the drugs
* I had a psychotic breakdown, and I believed I was super evil
* the same thing also happened to a few other people
* conclusion: I still blame MIRI/CFAR, and I am trying to downplay Vassar's role in this

If you can't see how these two stories differ, then... I don't have sufficiently polite words to describe it, so let's just say that to me these two stories seem very different. Lest you accuse me of gaslighting, let me remind you that I am not doubting any of the factual statements you made. (I actually tried to... (read more)

[-] Eliezer Yudkowsky (7h, 53)

I could be very wrong, but the story I currently have about this myself is that Vassar himself was a different and saner person before he tried some psychedelics. :( :( :(

[-] Viliam (5h, 8)

That... must have hurt a lot. (I hope your story is right.)
[-] Unreal (10h, 27)

Where did jessicata corroborate this sentence: "then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil"?

[-] countingtoten (11h, 21)

I should note that, as an outsider, the main point I recall Eliezer making in that vein is that he used Michael Vassar as a model for the character who was called Professor Quirrell. As an outsider, I didn't see that as an unqualified endorsement - though I think your general message should be signal-boosted.

[-] ChristianKl (9h, 1)

The claim that Michael Vassar is substantially like Quirrell seems strange to me. Where did you get the claim that Eliezer modelled Quirrell after Vassar?

To make the claim a bit more based on public data, take Vassar's TEDx talk [https://www.youtube.com/watch?v=W5UAOK1bk74&t=5s]. I think it gives a good impression of how Vassar thinks. There are official statistics that appear to support the claim Vassar makes about Jordan's life expectancy, so I think there's a good chance that Vassar actually believes what he says. If you look deeper, however, Jordan's life expectancy is not as high as Vassar asserts. Given that the video is in the public record, that's an error that anybody who tries to fact-check what Vassar is saying can find. I don't think it's in Vassar's interest to give a public talk like that with claims that are easily found to be wrong by fact-checking. Quirrell wouldn't have made an error like this; he is a lot more controlled.

Eliezer made Vassar president of the precursor of MIRI. That's a strong signal of trust and endorsement.

[-] countingtoten (8h, 21)

https://yudkowsky.tumblr.com/writing/empathyrespect

[-] Davis_Kingsley (8h, 17)

Eliezer has openly said Quirrell's cynicism is modeled after a mix of Michael Vassar and Robin Hanson.

[-] TekhneMakre (12h, 19)

"you publicly describe your suffering as a way to show people that MIRI/CFAR is evil."

Could you expand more on this? E.g. what are a couple of sentences in the post that seem most to be trying to show this? Because it seems like you call it bad when you attribute it to MIRI/CFAR, but when other people suggest that Vassar was responsible, then it seems a bit like no big deal, definitely not anything to blame him for. I appreciate the thrust of your comment, including this sentence, but also this sentence seems uncharitable, like it's collapsing down stuff that shouldn't be collapsed.

For example, it could be that the MIRI/CFAR/etc. social field could set up (maybe by accident, or even due to no fault of any of the "central" people) the conditions where "psychosis" is the best of the bad available options; in which case it makes sense to attribute causal fault to the social field, not to a person who e.g. makes that clear to you, and therefore more proximally causes your breakdown. (Of course there's disagreement about whether that's the state of the world, but it's not necessarily incoherent.)

I do get the sense that jessicata is relating in a funny way to Michael Vassar, e.g. by warping the narrative around him while selectively posing as "just trying to state facts" in relation to other narrative fields; but this is hard to tell, since it's also what it might look like if Michael Vassar was systematically scapegoated, and jessicata is reporting more direct/accurate (hence less bad-seeming) observations.

[-] jessicata (9h, 13)

"But from my perspective, you are an unreliable narrator."

I appreciate you telling me this, given that you believe it.
I definitely am in some ways, and try to improve over time. then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil I said in the text that (a) there were conversations about corruption in EA institutions, including about the content of Ben Hoffman's posts, (b) I was collaborating with Michael Vassar at the time, (c) Michael Vassar was commenting about social epistemology. I admit that connecting points (a) and (c) would have made the connection clearer, but it wouldn't have changed the text much. In cases where someone was previously part of a "cult" and later says it was a "cult" and abusive in some important ways, there has to be a stage where they're thinking about how bad the social context was, and practically always, that involves conversations with other people who are encouraging them to look at the ways their social context is bad. So my having conversations where people try to convince me CFAR/MIRI are evil is expected given what el... (read more) Reply 0nshepperd20hWhat I'm saying is that the Berkeley community should be. Supplying illicit drugs is a crime (but perhaps the drugs were BYO?). IDK if doing so and negligently causing permanent psychological injury is a worse crime, but it should be. [-]jessicata20h 25 I'm not going to comment on drug usage in detail for legal reasons, except to note that there are psychedelics legal in some places, such as marijuana in CA. It doesn't make sense to attribute unique causal responsibility for psychotic breaks to anyone, except maybe to the person it's happening to. There are lots of people all of us were talking to in that time period who influenced us, and multiple people were advocating psychedelic use. Not all cases happened to people who were talking significantly with Michael around the time. As I mentioned in the OP, as I was becoming more psychotic, people tried things they thought might help, which generally didn't, and they could have done better things instead. Even causal responsibility doesn't imply blame, e.g. Eliezer had some causal responsibility due to writing things that attracted people to the Berkeley scene where there were higher-variance psychological outcomes. Michael was often talking with people who were already "not ok" in important ways, which probably affects the statistics. Reply 6devi6hPlease see my comment on the grandparent [https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=Cp9sXekqpcMpFEtDk]. I agree with Jessica's general characterization that this is better understood as multi-causal rather than the direct cause of actions by one person. [-]jessicata1d 34 I feel pretty defensive reading and responding to this comment, given a previous conversation with Scott Alexander where he said his professional opinion would be that people who have had a psychotic break should be on antipsychotics for the rest of their life (to minimize risks of future psychotic breaks). This has known severe side effects like cognitive impairment and brain shrinkage and lacks evidence of causing long-term improvement. When I was on antipsychotics, my mental functioning was much lower (noted by my friends) and I gained weight rapidly.
(I don't think short-term use of antipsychotics was bad, in my case) It is in this context that I'm reading that someone talking about the possibility of mental subprocess implantation ("demons") should be "treated as a psychological emergency", when the Eric Bryulant case had already happened, and talking about the psychological processes was necessary for making sense of the situation. I feared involuntary institutionalization at the time, quite a lot, for reasons like this. If someone expresses opinions like this, and I have reason to believe they would act on them, then I can't believe myself to have freedom of speech. That ... (read more) Reply [-]Scott Alexander1d 73 I don't remember the exact words in our last conversation. If I said that, I was wrong and I apologize. My position is that in schizophrenia (which is a specific condition and not just the same thing as psychosis), lifetime antipsychotics might be appropriate. E.g. this paper suggests continuing for twelve months after a first schizophrenic episode and then stopping and seeing how things go, which seems reasonable to me. It also says that if every time you take someone off antipsychotics they become fully and dangerously psychotic again, then lifetime antipsychotics are probably their best bet. In a case like that, I would want the patient's buy-in, i.e. if they were medicated after a psychotic episode I would advise them of the reasons why continued antipsychotic use was recommended in their case; if they said they didn't want it we would explore why given the very high risk level, and if they still said they didn't want it then I would follow their direction. I didn't get a chance to talk to you during your episode, so I don't know exactly what was going on. I do think that psychosis should be thought of differently than just "weird thoughts that might be true", as more of a whole-body n... (read more) Reply [-]jessicata1d 44 I don't remember the exact words in our last conversation. If I said that, I was wrong and I apologize. Ok, the opinions you've described here seem much more reasonable than what I remember, thanks for clarifying. I do think that psychosis should be thought of differently than just "weird thoughts that might be true", since it's a whole-body nerve-and-brain dysregulation of which weird thoughts are just one symptom. I agree, yes. I think what I was afraid of at the time was being called crazy and possibly institutionalized for thinking somewhat weird thoughts that people would refuse to engage with, and showing some signs of anxiety/distress that were in some ways a reaction to my actual situation. By the time I was losing sleep etc, things were quite different at a physiological level and it made sense to treat the situation as a psychiatric emergency. If you can show someone that they're making errors that correspond to symptoms of mild psychosis, then telling them that and suggesting corresponding therapies to help with the underlying problem seems pretty reasonable. Reply [-]Scott Alexander1d 36 Thanks, if you meant that, when someone is at a very early stage of thinking strange things, you should talk to them about it and try to come to a mutual agreement on how worrying this is and what the criteria would be for psych treatment, instead of immediately dehumanizing them and demanding the treatment right away, then I 100% agree.
Reply [-]hg0013h 11 I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway - you're treating a symptom. If psychosis is caused by an underlying physiological/biochemical process, wouldn't that suggest that e.g. exposure to Leverage Research wouldn't be a cause of it? If being part of Leverage is causing less reality-based thoughts and nudging someone into mild psychosis, I would expect that being part of some other group could cause more reality-based thoughts and nudge someone away from mild psychosis. Why would causation be possible in one direction but not the other? I guess another hypothesis here is that some cases are caused by social/environmental factors and others are caused by biochemical factors. If that's true, I'd expect changing someone's environment to be more helpful for the former sort of case. Reply 9TekhneMakre1d[probably old-hat [ETA: or false], but I'm still curious what you think] My (background unexamined) model of psychosis -> schizophrenia is that something, call it the "triggers", sets a person on a trajectory of less coherence / grounding; if the trajectory isn't corrected, they just go further and further. The "triggers" might be multifarious; there might be "organic" psychosis and "psychic" psychosis, where the former is like what happens from lead poisoning, and the latter is, maybe, what happens when you begin to become aware of some horrible facts. If your brain can rearrange itself quickly enough to cope with the newly known reality, your trajectory points back to the ground. If it can't, you might have a chain reaction where (1) horrible facts you were previously carefully ignoring are revealed because you no longer have the superstructure that was ignore-coping with them; (2) your ungroundedness opens the way to unepistemic beliefs, some of which might be additionally horrifying if true; (3) you're generally stressed out because things are going wronger and wronger, which reinforces everything. If this is true, then your statement is only true for some values of "guide them back to reality-based thoughts". If you're trying to help them go back to ignore-coping, you might partly succeed, but not in a stable way, because you only pushed the ball partway back up the hill, to mix metaphors--the ball is still on a slope and will roll back down when you stop pushing, the horrible fact is still revealed and will keep being horrifying. But there's other things you could do, like helping them find a non-ignore-cope for the fact; or show them enough that they become convinced that the belief isn't true. 5Rafael Harth1dThere is this basic idea (I think from an old blogpost that Eliezer wrote) that if someone says there are goblins in the closet, dismissing them outright is confusing rationality with trust in commonly held claims, whereas the truly rational thing is to just open the closet and look. I think this is correct in principle but not applicable in many real-world cases. The real reason why even rational people routinely dismiss many weird explanations for things isn't that they have sufficient evidence against them, it's that the weird explanation is inconsistent with a large set of high confidence beliefs that they currently hold.
If someone tells me that they can talk to their deceased parents, I'm probably not going to invest the time to test whether they can obtain novel information this way; I'm just going to assume they're delusional because I'm confident spirits don't exist. That said, if that someone helped write the logical induction paper, I personally would probably hear them out regardless of how weird the thing sounds. Nonetheless, I think it remains true that dismissing beliefs without considering the evidence is often necessary in practice. 9TekhneMakre1dThis is failing to track ambiguity in what's being referred to. If there's something confusing happening--something that seems important or interesting, but that you don't yet have words to well-articulate--then you try to say what you can (e.g. by talking about "demons"). In your scenario, you don't know exactly what you're dismissing. You can confidently dismiss, in the absence of extraordinary evidence, that (1) their parents' brains have been rotting in the ground, and (2) they are talking with their parents, in the same way you talk to a present friend; you can't confidently dismiss, for example, that they are, from their conscious perspective, gaining information by conversing with an entity that's naturally thought of as their parents (which we might later describe as, they have separate structure in them, not integrated with their "self", that encoded thought patterns from their parents, blah blah blah etc.). You can say "oh well yes of course if it's *just a metaphor* maybe I don't want to dismiss them", but the point is that from a partially pre-theoretic confusion, it's not clear what's a metaphor and it requires further work to disambiguate what's a metaphor. 3CronoDAS16hAs the joke goes, there's nothing crazy about talking to dead people. When dead people respond, then you start worrying. [-]Desrtopa21h 30 So, it's been a long time since I actually commented on Less Wrong, but since the conversation is here... Hearing about this is weird for me, because I feel like, compared to the opinions I heard about him from other people in the community, I kind of... always had uncomfortable feelings about Mike Vassar? And I say this without having had direct personal contact with him except, IIRC, maybe one meetup I attended where he was there and we didn't talk directly, although we did occasionally participate in some of the same conversations online. By all accounts, it sounds like he's always been quite charismatic in person, and this isn't the first time I've heard someone describe him as a "wizard." But empirically, there are some people who're very charismatic who propagate some really bad ideas and whose impacts on the lives of people around them, or on society at large, can be quite negative. As of the last time I was paying attention to him, I wouldn't have expected Mike Vassar to have that negative an effect on the lives of the people around him, but I was always stuck in an awkward position of feeling like I was surrounded by people who took him more seriously than I felt like he ought ... (read more) Reply [-]Vanessa Kosoy10h 40 I met Vassar once. He came across as extremely charismatic (with a sort of charisma that probably only works on a particular type of people, which includes me), creating the impression of saying wise and insightful things (especially if you lack relevant domain knowledge), while in truth he was saying a lot of stuff which was patently absurd.
Something about his delivery was so captivating that it took me a while to "shake off the fairy dust" and realize just how silly some of his claims were, even when it should have been obvious from the start. Moreover, his worldview seemed heavily based on paranoid / conspiracy-theory type of thinking. So, yes, I'm not too surprised by Scott's revelations about him. Reply 8Viliam13hHeh, the same feeling here. I didn't have much opportunity to interact with him in person. I remember repeatedly hearing praise about how incredibly smart he is (from people whom I admired), then trying to find something smart written by him, and feeling unimpressed and confused, like maybe I wasn't reading the right texts or I failed to discover the hidden meaning that people smarter than me have noticed. Hypothesis 1: I am simply not smart enough to recognize his greatness. I can recognize people one level above me, and they can recognize people one level above them, but when I try to understand someone two levels above me, it's all gibberish to me. Hypothesis 2: He is more persuasive in person than in writing. (But once he impressed you in person, you will now see greatness in his writing, too. Maybe because of halo effect. Maybe because now you understand the hidden layers of what he actually meant by that.) Maybe he is more persuasive in person because he can make his message optimized for the receiver; which might be a good thing, or a bad thing. Hypothesis 3: He gives high-variance advice. Some of it amazingly good, some of it horribly wrong. When people take him seriously, some of them benefit greatly, others suffer. Those who benefitted will tell the story. (Those who suffered will leave the community.) My probability distribution was gradually shifting from 1 to 3. 3Avi12hI haven't seen/heard anything particularly impressive from him either, but perhaps his 'best work' just isn't written down anywhere? 4CronoDAS16hMy impression as an outsider (I met him once and heard and read some things people were saying about him) was that he seemed smart but also seemed like kind of a kook... [-]Scott Alexander2h 24 I've posted an edit/update above after talking to Vassar. Reply [-]gwern21h 18 A question for the 'Vassarites', if they will: were you doing anything like the "unihemispheric sleep" exercise (self-inducing hallucinations/dissociative personalities by sleep deprivation) the Zizians are described as doing? Reply [-]jessicata21h 33 No. All sleep deprivation was unintentional (anxiety-induced in my case). Reply [-]ChristianKl15h 12 I banned him from SSC meetups for a combination of reasons including these If you make bans like these it would be worth communicating them to the people organizing SSC meetups. Especially when a ban is made for the safety of meetup participants, not communicating it seems very strange to me. Vassar lived in Berlin for a while after he left the Bay Area, and for decisions about whether to make an effort to integrate someone like him (and invite him to LW and SSC meetups), that kind of information is valuable; Bay Area people not sharing it, while claiming to do anything that would work in practice like a ban, feels misleading. For reasons I don't fully understand and which might or might not be related to this, he left the Bay Area. This was around the time COVID happened, so everything's kind of been frozen in place since then. I think Vassar left the Bay Area more than a year before COVID happened.
As far as I remember his stated reasoning was something along the lines of everyone in the Bay Area getting mindkilled by leftish ideology. Reply [-]Scott Alexander12h 16 It was on the Register of Bans, which unfortunately went down after I deleted the blog. I admit I didn't publicize it very well because this was a kind of sensitive situation and I was trying to do it without destroying his reputation. Reply 6ChristianKl12hIf there are bans that are supposed to be enforced, mentioning that in the mails that go out to organizers for an ACX Everywhere event would make sense. I'm not 100% sure that I got all the mails because Ruben forwarded mails for me (I normally organize LW meetups in Berlin and support Ruben with the SSC/ACX meetups), but in those there was no mention of the word ban. I don't think it needs to be public, but having such information in a mail like the one from Aug 23 would likely be necessary for a good portion of the meetup organizers to know that there is an expectation that certain people aren't welcome. 2ChristianKl31mhttps://www.lesswrong.com/posts/iWWjq5BioRkjxxNKq/michael-vassar-at-the-slatestarcodex-online-meetup seems to have happened after that point in time. Vassar not only attended a Slate Star Codex online meetup but was central in it, presenting his thoughts. -3Viliam12hIt might be useful to have a global blacklist somewhere. There could be legal consequences, though, if someone decides to sue you for libel. (Perhaps the list should only contain the names, not the reasons?) EDIT: Nevermind. There are more things I would like to say about this, but this is not the right place. Later I may write a separate article explaining the threat model I had in mind. 4ChristianKl8hLegal threats matter a great deal for what can be done in a situation like this. When it comes to a "global blacklist" there's the question about governance. Who decides who's on and who isn't. When it comes to SSC or ACX meetups the governance question is clear. Anybody who's organizing a meetup under those labels should follow Scott's guidance. That however only works if that information is communicated to meetup organizers. [-]Aella1d 192 I find something in me really revolts at this post, so epistemic status... not-fully-thought-through-emotions-are-in-charge? Full disclosure: I am good friends with Zoe; I lived with her for the four months leading up to her post, and was present to witness a lot of her processing and pain. I'm also currently dating someone named in this post, but my reaction to this was formed before talking with him. First, I'm annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away. If the points in the post felt more compelling, then I'd probably be more down for an argument of "we should bin these together and look at this as a whole", but as it stands the stuff listed in here feels like it's describing something significantly less damaging, and of a different kind of damage. I'm also annoyed that this post relies so heavily on Zoe's, and the comparison feels like it cheapens what Zoe went through. I keep having a recurring thought that the author must have utterly failed to understand the intensity of the very direct impact from Leverage's operations on Zoe. Mo...
(read more) Reply [-]Eliezer Yudkowsky1d 125 By way of narrowing down this sense, which I think I share, if it's the same sense: leaving out the information from Scott's comment about a MIRI-opposed person who is advocating psychedelic use and causing psychotic breaks in people, and particularly this person talks about MIRI's attempts to have any internal info compartments as a terrible dark symptom of greater social control that you need to jailbreak away from using psychedelics, and then those people have psychotic breaks - leaving out this info seems to be not something you'd do in a neutrally intended post written from a place of grave concern about community dynamics. It's taking the Leverage affair and trying to use it to make a point, and only including the info that would make that point, and leaving out info that would distract from that point. And I'm not going to posture like that's terribly bad inhuman behavior, but we can see it and it's okay to admit to ourselves that we see it. And it's also okay for somebody to think that the original Leverage affair needed to be discussed on its own terms, and not be carefully reframed in exactly the right way to make a point about a higher-profile group the author... (read more) Reply [-]Benquo1d 17 not something you'd do in a neutrally intended post written from a place of grave concern about community dynamics I'm not going to posture like that's terribly bad inhuman behavior, but we can see it and it's okay to admit to ourselves that we see it These have the tone of allusions to some sort of accusation, but as far as I can tell you're not actually accusing Jessica of any transgression here, just saying that her post was not "neutrally intended," which - what would that mean? A post where Gricean implicature was not relevant? Can you clarify whether you meant to suggest Jessica was doing some specific harmful thing here or whether this tone is unendorsed? Reply [-]Eliezer Yudkowsky20h 50 Okay, sure. If what Scott says is true, and it matches my recollections of things I heard earlier - though I can attest to very little of it from my direct observation - then it seems like this post was written with knowledge of things that would make the overall story arc it showed look very different, and those things were deliberately omitted. This is more manipulation than I myself would personally consider okay to use in a situation like this one, though I am ever mindful of Automatic Norms and the privilege of being more verbally facile than others in which facts I can include but still make my own points. Reply 5jessicata20hSee Zack's reply here [https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=GzqsWxEp8uLcZinTy] and mine here [https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=t8AAzuukWxmDDkAvW]. Overall I didn't think the amount of responsibility was high enough for this to be worth mentioning. [-]Ruby1d 69 First, I'm annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away... I want to second this reaction (basically your entire second paragraph). I have been feeling the same but hadn't worked up the courage to say it.
Reply [-]Viliam12h 15 For a moment I actually wondered whether this was a genius-level move by Leverage, but then I decided that I am just being paranoid. But it did derail the previous debate successfully. On the positive side, I learned some new things. Never heard about Ziz before, for example. EDIT: Okay, this is probably silly, but... there is no connection between the Vassarites and Leverage, right? I just realized that my level of ignorance does not justify me dismissing a hypothesis so quickly. And of course, everyone knows everyone, but there are different levels of "knowing people", and... you know what I mean, hopefully. I will defer to the judgment of people from the Bay Area about this topic. Reply [-]Eliezer Yudkowsky1d 14 +2. Reply [-]Freyja6h 12 I am also mad at what I see to be piggybacking on Zoe's post, downplaying of the harms described in her post, and a subtle redirection of collective attention away from potentially new, timid accounts of things that happened to a specific group of people within Leverage who seem to have a lot of difficulty talking about it. I hope that the sustained collective attention required to witness, make sense of and address the accounts of harm coming out of the psychology division of Leverage doesn't get lost as a result of this post being published when it was. Reply [-]romeostevensit16h 34 One of the things that can feel like gaslighting in a community that attracts highly scrupulous people is when posting about your interpretation of your experience is treated as a contractual obligation to defend the claims and discuss any possible misinterpretations or consequences of what is a challenging thing to write in the first place. Reply [-]Aella5h 26 I feel like, here and in so many other comments in this discussion, there are important and subtle distinctions being missed. I don't have any intention to unconditionally accept and support all accusations made (I have seen false accusations cause incredible harm and suicidality in people close to me). I do expect people who make serious claims about organizations to be careful about how they do it. I think Zoe's Leverage post easily met my standard, but that this post here triggered a lot of warning flags for me, and I find it important to pay attention to those. Reply [-]Duncan_Sabien16h 14 Speaking of highly scrupulous... I think that the phrases "treated as a contractual obligation" and "any possible misinterpretations or consequences" are both hyperbole, if they are (as they seem) intended as fair summaries or descriptions of what Aella wrote above. I think there's a skipped step here, where you're trying to say that what Aella wrote above might imply those things, or might result in those things, or might be tantamount to those things, but I think it's quite important to not miss that step. Before objecting to Aella's [A] by saying "[B] is bad!" I think one should justify or at least explicitly assert [A-->B] Reply 7romeostevensit7hYes, and to clarify I am not attempting to imply that there is something wrong with Aella's comment. It's more like this is a pattern I have observed and talked about with others. I don't think people playing a part in a pattern that has some negative side effects should necessarily have a responsibility frame around that, especially given that one literally can't track all various possible side effects of actions.
I see epistemic statuses as partially attempting to give people more affordance for thinking about possible side effects of the multi-context nature of online comms; that was used to good effect here. I likely would have had a more negative reaction to Aella's post if it hadn't included the epistemic status. [-]Ben Pace1d 30 One example of this is comparing Zoe's mention of someone at Leverage having a psychotic break to the author having a psychotic break. But Zoe's point was that Leverage treated the psychotic break as an achievement, not that the psychotic break happened. From the quotes in Scott's comment, it seems to me also the case that Michael Vassar treated Jessica's and Ziz's psychoses as an achievement. Reply [-]Zack_M_Davis20h 88 it seems to me also the case that Michael Vassar treated Jessica's [...] psycho[sis] as an achievement Objection: hearsay. How would Scott know this? (I wrote a separate reply about the ways in which I think Scott's comment is being unfair.) As some closer-to-the-source counterevidence against the "treating as an achievement" charge, I quote a 9 October 2017 2:13 p.m. Signal message in which Michael wrote to me: Up for coming by? I'd like to understand just how similar your situation was to Jessica's, including the details of her breakdown. We really don't want this happening so frequently. (Also, just, whatever you think of Michael's many faults, very few people are cartoon villains that want their friends to have mental breakdowns.) Reply [-]Ben Pace19h 19 Thanks for the counter-evidence. Reply [-]jessicata1d 24 The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away. I'm assuming that sensemaking is easier, rather than harder, with more relevant information and stories shared. I guess if it's pulling the spotlight away, it's partially because it's showing relevant facts about things other than Leverage, and partially because people will be more afraid of scapegoating Leverage if the similarities to MIRI/CFAR are obvious. I don't like scapegoating, so I don't really care if it's pulling the spotlight away for the second reason. If the points in the post felt more compelling, then I'd probably be more down for an argument of "we should bin these together and look at this as a whole", but as it stands the stuff listed in here feels like it's describing something significantly less damaging, and of a different kind of damage. I don't really understand what Zoe went through, just reading her post (although I have talked with other ex-Leverage people about the events). You don't understand what I went through, either. It was really, really psychologically disturbing. I sound paran... (read more) Reply [-]ChristianKl15h 37 I don't really understand what Zoe went through, just reading her post (although I have talked with other ex-Leverage people about the events). You don't understand what I went through, either. It was really, really psychologically disturbing. I sound paranoid writing what I wrote, but this paranoia affected so many people. It would probably have been better if you had focused on your experience and dropped all of the talk about Zoe from this post. That would make it easier for the reader to just take the information value from your experience.
I think that your post is still valuable information, but that added narrative layer makes it harder to interact with than it would have been had it focused more on your experience. Reply [-]hg0014h 19 The community still seems in the middle of sensemaking around Leverage Understanding how other parts of the community were similar/dissimilar to Leverage seems valuable from a sensemaking point of view. Lots of parts of the post sort of implicitly present things as important, or ask you to draw conclusions without explicitly pointing out those conclusions. I think you may be asking your reader to draw the conclusion that this is a dishonest way to write, without explicitly pointing out that conclusion :-) Personally, I see nothing wrong with presenting only observations. Reply [-]4thWayWastrel1d 15 I empathise with the feeling of slipperiness in the OP, but I feel comfortable attributing that to the subject matter rather than malice. If I had an experience that matched Zoe's to the degree jessicata's did (superficially or otherwise) I'd feel compelled to post it. I found it helpful in the question of whether "insular rationalist group gets weird and experiences rash of psychotic breaks" is a community problem, or just a problem with a stray dude. Reply [-]Aella1d 29 Scott's comment does seem to verify the "insular rationalist group gets weird and experiences rash of psychotic breaks" trend, but it seems to be a different group than the one named in the original post. Reply [-]farp1d 14 First, I'm annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away. Yeesh. I don't think we should police victims' timing. That seems really evil to me. We should be super skeptical of any attempts to tell people to shut up about their allegations, and "your timing is very insensitive to the real victims" really does not pass the smell test for me. Reply [-]Aella1d 57 I don't think "don't police victims' timing" is an absolute rule; not policing the timing is a pretty good idea in most cases. I think this is an exception. And if I wasn't clear, I'll explicitly state my position here: I think it's good to pay close attention to negative effects communities have on their members, and I am very pro people talking about this, and if people feel hurt by an organization it seems really good to have this publicly discussed. But I believe the above post did not simply do that. It also did other things: it framed things in ways I perceive as misleading, left out key information relevant to the discussion (as per Eliezer's comment here), and relied very heavily and directly on Zoe's account of Leverage to bring validity to its own claims, when I perceive Leverage as having been both significantly worse and worse in a different category of way. If the above post hadn't done these things, I don't think I would have any issue with the timing. Reply -11farp17h [-]Viliam10h 23 Some context, please. Imagine the following scenario:
* Victim A: "I was hurt by X."
* Victim B: "I was hurt by Y."
There is absolutely nothing wrong with this, whether it happens the same day, the next day, or a week later. Maybe victim B was encouraged by (reactions to) victim A's message, maybe it was just a coincidence. Nothing wrong with that either. Another scenario:
* Victim A: "I was hurt by X."
* Victim B: "I was also hurt by X (in a different way, on another day etc.)."
This is a good thing to happen; more evidence, encouragement for further victims to come out. But this post is different in a few important ways. First, Jessicata piggybacks on Zoe's story a lot, insinuating analogies, but providing very little actual data. (If you rewrote the article to avoid referring to Zoe, it would be 10 times shorter.) Second, Jessicata repeatedly makes comparisons between Zoe's experience at Leverage and her experience at MIRI/CFAR, and usually concludes that Leverage was less bad (for reasons that are weird to me, such as because their abuse was legible, or because they provided space for people to talk about demons and exorcise them). Here are some quotes: I want to disagree with a frame that says th ... (read more) Reply -27farp1d [-]Benquo1d 14 First, I'm annoyed at the timing of this. The community still seems in the middle of sensemaking around Leverage, and figuring out what to do about it, and this post feels like it pulls the spotlight away. If we're trying to solve problems rather than attack the bad people, then the boundaries of the discussion should be determined by the scope of the problem, not by which people we're saying are bad. If you're trying to attack the bad people without standards or a theory of what's going on, that's just mob violence. Reply [-]Aella1d 36 I... think I am trying to attack the bad people? I'm definitely conflict-oriented around Leverage; I believe that on some important level treating that organization or certain people in it as good-intentioned-but-misguided is a mistake, and a dangerous one. I don't think this is true for MIRI/CFAR, as is summed up pretty well in the last section of Orthonormal's post here. I'm down for the boundaries of the discussion being determined by the scope of the problem, but I perceive the original post here to be outside the scope of the problem. I'm also not sure how to engage with your last sentence. I do have theories for what is going on (but regardless, I'm not sure that giving a mob a theory makes it not a mob). Reply [-]Benquo5h 11 This is explicitly opposed to Zoe's stated intentions. Other people, including me and Jessica, also want to reveal and discuss bad behavior, but don't consent to violence in the name of our grievances. Agnes Callard's article is relevant here: I Don't Want You to 'Believe' Me. I Want You to Listen. We want to reveal problems so that people can try to understand and solve those problems. Transforming an attempted discussion of abuse into a scapegoating movement silences victims, preventing others from trying to interpret and independently evaluate the content of what they are saying, simplifying it to a bid to make someone the enemy. Historically, the idea that instead of trying to figure out which behaviors are bad and police them, we need to try to quickly attack the bad people, is how we get Holocausts and Stalinist purges. In this case I don't see any upside. Reply [-]Aella4h 30 I perceive you as doing a conversational thing here that I don't like, where you like... imply things about my position without explicitly stating them? Or talk from a heavy frame that isn't explicit?
1. Which stated intentions? Where she asks people 'not to bother those who were there'? What thing do you think I want to do that Zoe doesn't want me to do?
2. Are you claiming I am advocating violence? Or simply implying it?
3. Are you trying to argue that I shouldn't be conflict-oriented because Zoe doesn't want me to be?
The last part feels a little weird for someone to tell me, as I'm good friends with Zoe and have talked with her extensively about this.
4. I support revealing problems so people can understand and solve them. I also don't like whatever is happening in this original article due to reasons you haven't engaged with.
5. You're saying transforming an attempt to discuss abuse into scapegoating silences victims, keeps other ppl from evaluating the content, and simplifies it to a bid to make someone the enemy. But in the comment you were responding to, I was talking about Leverage, not the author of this post. I view Leverage and co. as bad actors, but you sort of... reframe it to make it sound like I'm using a conflict mindset towards Jessica?
6. You're also not engaging with the points I made, and you're responding to arguments I don't condone. I don't really view you as engaging in good faith at this point, so I'm precommitting not to respond to you after this.
Reply 5[comment deleted]1d 4[comment deleted]5h [-]Ben Pace2d 162 Someone in the community told me that for me to think AGI probably won't be developed soon, I must think I'm better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness. Just zooming in on this, which stood out to me personally as a particular thing I'm really tired of. If you're not disagreeing with people about important things then you're not thinking. There are many options for how to negotiate a significant disagreement with a colleague, including spending lots of time arguing about it, finding a compromise action, or stopping collaborating with the person (if it's a severe disagreement, which often it can be). But telling someone that by disagreeing they're claiming to be 'better' than another person in some way always feels to me like an attempt to 'control' the speech and behavior of the person you're talking to, and I'm against it. It happens a lot. I recently overheard someone (who I'd not met before) telling Eliezer Yudkowsky that he's not allowed to have extreme beliefs about AGI outcomes. I don't recall the specific claim, just that EY's probability mass for the claim was in the 95-99% range. The person argued that because EY disagrees w... (read more) Reply [-]Eliezer Yudkowsky1d 135 I affirm the correctness of Ben Pace's anecdote about what he recently heard someone tell me. "How dare you think that you're better at meta-rationality than Eliezer Yudkowsky, do you think you're special" - is somebody trolling? Have they never read anything I've written in my entire life? Do they have no sense, even, of irony? Yeah, sure, it's harder to be better at some things than me, sure, somebody might be skeptical about that, but then you ask for evidence or say "Good luck proving that to us all eventually!" You don't be like, "Do you think you're special?" What kind of bystander-killing argumentative superweapon is that? What else would it prove? I really don't know how I could make this any clearer. I wrote a small book whose second half was about not doing exactly this. I am left with a sense that I really went to some lengths to prevent this, I did what society demands of a person plus over 10,000% (most people never write any extended arguments against bad epistemology at all, and society doesn't hold that against them), I was not subtle. At some point I have to acknowledge that other human beings are their own people...
(read more) Reply [-]jessicata1d 50 The irony was certainly not lost on me; I've edited the post to make this clearer to other readers. Reply [-]Benquo1d 11 I'm glad you agree that the behavior Jessica describes is explicitly opposed to the content of the Sequences, and that you clearly care a lot about this. I don't think anyone can reasonably claim you didn't try hard to get people to behave better, or could reasonably blame you for the fact that many people persistently try to do the opposite of what you say, in the name of Rationality. I do think it would be a very good idea for you to investigate why & how the institutions you helped build and are still actively participating in are optimizing against your explicitly stated intentions. Anna's endorsement of this post seems like reasonably strong confirmation that organizations nominally committed to your agenda are actually opposed to it, unless you're actually checking. And MIRI/CFAR donors seem to for the most part think that you're aware of and endorse those orgs' activities. When Jessica and another recent MIRI employee asked a few years ago for some of your time to explain why they'd left, your response was: My guess is that I could talk over Signal voice for 30 minutes or in person for 15 minutes on the 15th, with an upcoming other commitment providing a definite cutoff poi ... (read more) Reply [-]habryka1d 58 Anna's endorsement of this post seems like reasonably strong confirmation that organizations nominally committed to your agenda are actually opposed to it, Presumably Eliezer's agenda is much broader than "make sure nobody tries to socially enforce deferral to high-status figures in an ungrounded way," though I do think this is part of his goals. The above seems to me like it tries to equivocate between "this is confirmation that at least some people don't act in full agreement with your agenda, despite being nominally committed to it" and "this is confirmation that people are actively working against your agenda". These two really don't strike me as the same, and I really don't like how this comment seems like it tries to equivocate between the two. Of course, the claim that some chunk of the community/organizations Eliezer created are working actively against some agenda that Eliezer tried to set for them is plausible. But calling the above a "strong confirmation" of this fact strikes me as a very substantial stretch. Reply [-]Benquo5h 11 It's explicit opposition to core Sequences content, which Eliezer felt was important enough to write a whole additional philosophical dialogue about after the main Sequences were done. Eliezer's response when informed about it was: is somebody trolling? Have they never read anything I've written in my entire life? Do they have no sense, even, of irony? That doesn't seem like Eliezer agrees with you that someone got this wrong by accident; that seems like Eliezer agrees with me that someone identifying as a Rationalist has to be trying to get core things wrong to end up saying something like that. Reply 1lsusr1d"How dare you think that you're better at meta-rationality than Eliezer Yudkowsky, do you think you're special" reads to me as something Eliezer Yudkowsky himself would never write. -39throwaway462378964h [-]Peter Wildeford1d 27 I don't recall the specific claim, just that EY's probability mass for the claim was in the 95-99% range. The person argued that because EY disagrees with some other thoughtful people on that question, he shouldn't have such confidence.
I think people conflate the very reasonable "I am not going to adopt your 95-99% range because other thoughtful people disagree and I have no particular reason to trust you massively more than I trust other people" with the different "the fact that other thoughtful people disagree means there's no way you could arrive at 95-99% confidence," which is false. I think thoughtful people disagreeing with you is decent evidence that you are wrong, but it can still be outweighed. Reply [-]Eli Tyre19h 19 If you're not disagreeing with people about important things then you're not thinking. This is a great sentence. I kind of want it on a t-shirt. Reply [-]Alexander2d 12 I read this post with great interest because it touches on some crucial and sensitive themes. I sought a lesson we could learn from this situation, and your comment captured such a lesson well. This is reminiscent of the message of the Dune trilogy. Frank Herbert warns about society's tendencies to "give over every decision-making capacity" to a charismatic leader. Herbert said in 1979: The bottom line of the Dune trilogy is: beware of heroes. Much better rely on your own judgment, and your own mistakes. Reply 9Viliam1dIndeed. And if people object to someone disagreeing with them, that would imply they are 100% certain about being right. On one hand, this suggests that the pressure to groupthink is strong. On the other hand, this is evidence of Eliezer not being treated as an infallible leader... which I suppose is good news in this avalanche of bad news. (There is a method to reduce group pressure, by making everyone write their opinion first, and only then tell each other the opinions. Problem is, this stops working if you estimate the same thing repeatedly, because people already know what the group opinion was in the past.) [-]Rob Bensinger1d 136 Kate Donovan messaged me to say: I think four people experiencing psychosis in a period of five years, in a community this large with high rates of autism and drug use, is shockingly low relative to base rates. [...] A fast pass suggests that my 1-3% for lifetime prevalence was right, but mostly appearing at 15-35. And since we have conservatively 500 people in the cluster (a lot more people than that attended CFAR workshops or are in MIRI or CFAR's orbit), 4 is low. Given that I suspect the cluster is larger and I am pretty sure my numbers don't include drug induced psychosis, just primary psychosis. The base rate seems important to take into account here, though per Jessica, "Obviously, for every case of poor mental health that 'blows up' and is noted, there are many cases that aren't." (But I'd guess that's true for the base-rate stats too?) Reply [-]jessicata1d 26 This is a good point regarding the broader community. I do think that, given that at least 2 cases were former MIRI employees, there might be a higher rate in that subgroup. EDIT: It's also relevant that a lot of these cases happened in the same few years. 4 of the 5 cases of psychiatric hospitalization or jail time I know about happened in 2017, the other happened sometime 2017-2019. I think these people were in the 15-35 age range, which spans 20 years.
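(An aside on the arithmetic in Kate's message: the sketch below is a minimal back-of-the-envelope check, not anything asserted in the thread. The ~500-person cluster, the 1-3% lifetime prevalence, the ~20-year onset window (ages 15-35), and the roughly five-year observation period are all assumptions taken from the comments above; modeling the case count as Poisson is an additional simplifying assumption.)

# Back-of-the-envelope Poisson check of the base-rate argument above.
# All inputs are assumptions quoted from the surrounding comments,
# not measured values.
from scipy.stats import poisson

CLUSTER_SIZE = 500        # "conservatively 500 people in the cluster"
ONSET_WINDOW_YEARS = 20   # onset "mostly appearing at 15-35"
OBSERVED_YEARS = 5        # "four people ... in a period of five years"
OBSERVED_CASES = 4

for lifetime_prevalence in (0.01, 0.03):
    # Spread lifetime risk uniformly over the onset window.
    annual_rate = lifetime_prevalence / ONSET_WINDOW_YEARS
    expected_cases = CLUSTER_SIZE * annual_rate * OBSERVED_YEARS
    p_at_least_4 = 1 - poisson.cdf(OBSERVED_CASES - 1, expected_cases)
    print(f"prevalence {lifetime_prevalence:.0%}: "
          f"expected {expected_cases:.2f} cases, "
          f"P(>= {OBSERVED_CASES} cases) = {p_at_least_4:.2f}")

# prevalence 1%: expected 1.25 cases, P(>= 4 cases) = 0.04
# prevalence 3%: expected 3.75 cases, P(>= 4 cases) = 0.52

On these assumptions the observed count sits between roughly expected (at 3% lifetime prevalence) and mildly surprising (at 1%), and the result is quite sensitive to the assumed cluster size, which is why the size of the denominator matters so much to this argument.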
Reply 6Gunnar_Zarncke11hSee also studies about base-rate here: https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=pHaq26ZrznpC7D5f4 [-]orthonormal1d 125 Thank you for writing this, Jessica. First, you've had some miserable experiences in the last several years, and regardless of everything else, those times sound terrifying and awful. You have my deep sympathy. Regardless of my seeing a large distinction between the Leverage situation and MIRI/CFAR, I agree with Jessica that this is a good time to revisit the safety of various orgs in the rationality/EA space. I almost perfectly overlapped with Jessica at MIRI from March 2015 to June 2017. (Yes, this uniquely identifies me. Don't use my actual name here anyway, please.) So I think I can speak to a great deal of this. I'll run down a summary of the specifics first (or at least, the specifics I know enough about to speak meaningfully), and then at the end discuss what I see overall. Claim: People in and adjacent to MIRI/CFAR manifest major mental health problems, significantly more often than the background rate. I think this is true; I believe I know two of the first cases to which Jessica refers; and I'm probably not plugged-in enough socially to know the others. And then there's the Ziz catastrophe. Claim: Eliezer and Nate updated sharply toward shorter timelines, other MIRI researchers... (read more) Reply [-]Gunnar_Zarncke1d 39 "People in and adjacent to MIRI/CFAR manifest major mental health problems, significantly more often than the background rate. I think this is true." My main complaint about this and the Leverage post is the lack of base-rate data. How many people develop mental health problems in a) normal companies, b) startups, c) small non-profits, d) cults/sects? So far, all I have seen are two cases. And in the startups I have worked at, I would also have been able to find mental health cases that could be tied to the company narrative. Humans being human, narratives get woven. And the internet being the internet, some will get blown out of proportion. That doesn't diminish the personal experience at all. I am updating only slightly on CFAR or MIRI. And basically not at all on "things look better from the outside than from the inside." Reply [-]habryka21h 77 In particular, I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed depression or anxiety (link). Given the kind of undirected, often low-paid, work that many have been doing for the last decade, I think that's the right reference class to draw from, and my current guess is we are roughly at that same level, or slightly below it (which is a crazy high number, and I think should give us a lot of pause). Reply [-]Linch12h 39 I want to remind people here that something like 30-40% of grad students at top universities have either clinically diagnosed [emphasis mine] depression or anxiety (link) I'm confused about how you got to this conclusion, and think it is most likely false. Neither your link, the linked study, nor the linked meta-analysis in the linked study of your link says this. Instead the abstract of the linked^3 meta-analysis says: Among 16 studies reporting the prevalence of clinically significant symptoms of depression across 23,469 Ph.D.
students, the pooled estimate of the proportion of students with depression was 0.24 (95% confidence interval [CI], 0.18-0.31; I^2 = 98.75%). In a meta-analysis of the nine studies reporting the prevalence of clinically significant symptoms of anxiety across 15,626 students, the estimated proportion of students with anxiety was 0.17 (95% CI, 0.12-0.23; I^2 = 98.05%). Further, the discussion section of the linked^3 study emphasizes: While validated screening instruments tend to over-identify cases of depression (relative to structured clinical interviews) by approximately a factor of two [67, 68], our findings nonetheless point to a major public health problem among Ph ... (read more) Reply 6Gunnar_Zarncke11hNote that the pooled prevalence is 24% (CI 18-31). But it differs a lot across studies, symptoms, and location. In the individual studies, the range is really from zero to 50% (or rather to 38% if you exclude a study with only 6 participants). I think a suitable reference class would be the University of California, which has 3,190 participants and a prevalence of 38%. 9Linch2hSorry, am I misunderstanding something? I think taking "clinically significant symptoms", specific to the UC system, as a given is wrong because it did not directly address either of my two criticisms:
1. Clinically significant symptoms =/= clinically diagnosed, even in worlds where there is a 1:1 relationship between clinically significant symptoms and what would have been clinically diagnosed, as many people do not get diagnosed
2. Clinically significant symptoms do not have a 1:1 relationship with what would have been clinically diagnosed.
2Gunnar_Zarncke32mWell, I agree that the actual prevalence you have in mind would be roughly half of 38%, i.e. ~20%. That is still much higher than the 12% you arrived at. And either value is so high that there is little surprise some severe episodes of some people happened in a 5-year frame. 4habryka6hThe UC Berkeley study was the one that I had cached in my mind as generating this number. I will reread it later today to make sure that it's right, but it sure seems like the most relevant reference class, given the same physical location. 6Gunnar_Zarncke3hI had a look at the situation in Germany and it doesn't look much better. 17% of students are diagnosed with at least one psychiatric disorder. This is based on the health records of all students insured by one of the largest public health insurers in Germany (about ten percent of the population): https://www.barmer.de/blob/144368/08f7b513fdb6f06703c6e9765ee9375f/data/dl-barmer-arztreport-2018.pdf 5habryka3hI feel like the paragraph you cited just seems like the straightforward explanation of where my belief comes from? 24% of people have depression, 17% have anxiety, resulting in something like 30%-40% having one or the other. I did not remember the section about the screening instruments over-identifying cases of depression/anxiety by approximately a factor of two, which definitely cuts down my number, and I should have adjusted it in my above comment. I do think that factor of ~2 does maybe make me think that we are doing a bit worse than grad students, though I am not super sure. [-]Linch2h 15 Sorry, maybe this is too nitpicky, but clinically significant symptoms =/= clinically diagnosed, even in worlds where the clinically significant symptoms are severe enough to be diagnosed as such.
If you had instead said "in population studies, 30-40% of graduate students have anxiety or depression severe enough to be clinically diagnosed as such were they to seek diagnosis," then I think this would be a normal misreading from not jumping through enough links. Put another way, if someone in mid-2020 told me that they had symptomatic covid and were formally diagnosed with covid, I would expect that they had worse symptoms than someone who said they had covid symptoms and later tested for covid antibodies. This is because jumping through the hoops to get a clinical diagnosis is nontrivial Bayesian evidence of severity and not just certainty, under most circumstances, and especially when testing is limited and/or gatekept (which is true for many parts of the world for covid in 2020, and is usually true in the US for mental health). Reply 6habryka2hAh, sorry, yes. Me being unclear on that was also bad. The phrasing you give is the one I intended to convey, though I sure didn't do it. 1Linch2hThanks, appreciate the update! [-]orthonormal1d 39 Additionally, as a canary statement: I was also never asked to sign an NDA. Reply [-]Vaniver1d 15 I think CFAR would be better off if Anna delegated hiring to someone else. I think Pete did (most of?) the hiring as soon as he became ED, so I think this has been the state of CFAR for a while (while I think Anna has also been able to hire people she wanted to hire). Reply 8PeteMichaud16hIt's always been a somewhat group-involved process, but yes, I was primarily responsible for hiring for roughly 2016 through the end of 2017, then it would have been Tim. But again, it's a small org and hiring always involved the whole group to some extent. 8Eli Tyre15hWithout denying that it is a small org and staff usually have some input over hiring, that input is usually informal. My understanding is that in the period when Anna was ED, there was an explicit all-staff discussion when they were considering a hire (after the person had done a trial?). In the Pete era, I'm sure Pete asked for staff members' opinions, and if (for instance) I sent him an email with my thoughts on a potential hire, he would take that info into account, but there was no institutional group meeting. [-]Vanessa Kosoy1d 12 if one believed somebody else were just as capable of causing AI to be Friendly, clearly one should join their project instead of starting one's own. Nitpicking: there are reasons to have multiple projects; for example, it's convenient to be in the same geographic location, but not everyone can relocate to any given place. Reply 4orthonormal1dSure - and MIRI/FHI are a decent complement to each other, the latter providing a respectable academic face to weird ideas. Generally though, it's far more productive to have ten top researchers in the same org rather than having five orgs each with two top researchers and a couple of others to round them out. Geography is a secondary concern to that. 4Vanessa Kosoy1dA "secondary concern" in the sense that, we should work remotely? Or in the sense that everyone should relocate? Because the latter is unrealistic: people have families, friends, communities; not everyone can uproot themselves. 8orthonormal1dA secondary concern in that it's better to have one org that has some people in different locations, but everyone communicating heavily, than to have two separate organizations. 4Davidmanheim16hI think this is much more complex than you're assuming.
[-]orthonormal 1d 39: Additionally, as a canary statement: I was also never asked to sign an NDA.

[-]Vaniver 1d 15: "I think CFAR would be better off if Anna delegated hiring to someone else." I think Pete did (most of?) the hiring as soon as he became ED, so I think this has been the state of CFAR for a while (while I think Anna has also been able to hire people she wanted to hire).

8 PeteMichaud 16h: It's always been a somewhat group-involved process, but yes, I was primarily responsible for hiring from roughly 2016 through the end of 2017; then it would have been Tim. But again, it's a small org, and hiring always involved the whole group to some degree.

8 Eli Tyre 15h: Without denying that it is a small org and staff usually have some input over hiring, that input is usually informal. My understanding is that in the period when Anna was ED, there was an explicit all-staff discussion when they were considering a hire (after the person had done a trial?). In the Pete era, I'm sure Pete asked for staff members' opinions, and if (for instance) I sent him an email with my thoughts on a potential hire, he would take that info into account, but there was no institutional group meeting.

[-]Vanessa Kosoy 1d 12: "if one believed somebody else were just as capable of causing AI to be Friendly, clearly one should join their project instead of starting one's own." Nitpicking: there are reasons to have multiple projects; for example, it's convenient to be in the same geographic location, but not everyone can relocate to any place.

4 orthonormal 1d: Sure - and MIRI/FHI are a decent complement to each other, the latter providing a respectable academic face to weird ideas. Generally, though, it's far more productive to have ten top researchers in the same org rather than having five orgs each with two top researchers and a couple of others to round them out. Geography is a secondary concern to that.

4 Vanessa Kosoy 1d: A "secondary concern" in the sense that we should work remotely? Or in the sense that everyone should relocate? Because the latter is unrealistic: people have families, friends, communities; not everyone can uproot themselves.

8 orthonormal 1d: A secondary concern in that it's better to have one org with some people in different locations, but everyone communicating heavily, than to have two separate organizations.

4 Davidmanheim 16h: I think this is much more complex than you're assuming. As a sketch of why: costs of communication scale poorly, and the benefits of being small and coordinating centrally often beat the costs imposed by needing to run everything as one organization. (This is why people advise startups to outsource non-central work.)

2 Vanessa Kosoy 14h: This might be the right approach, but notice that no existing AI risk org does that. They all require physical presence.

4 novalinium 5h: Anthropic [https://www.anthropic.com/] does not require consistent physical presence.

4 Vanessa Kosoy 5h: AFAICT, Anthropic is not an existential AI safety org per se; they're just doing a very particular type of research which might help with existential safety. But also, why do you think they don't require physical presence?

5 Vaniver 4h: I believe Anthropic doesn't expect its employees to be in the office every day, but I think this is more pandemic-related than a deliberate organizational design choice; my guess is that most Anthropic employees will be in the office a year from now.

[-]nostalgebraist 1d 113: First, thank you for writing this. Second, I want to jot down a thought I've had for a while now, and which came to mind when I read both this and Zoe's Leverage post. To me, it looks like there is a recurring phenomenon in the rationalist/EA world where people...
* ...become convinced that the future is in their hands: that the fate of the entire long-term future ("the future light-cone") depends on the success of their work, and the work of a small circle of like-minded collaborators
* ...become convinced that (for some reason) only they, and their small circle, can do this work (or can do it correctly, or morally, etc.) -- that in spite of the work's vast importance, in spite of the existence of billions of humans and surely at least thousands with comparable or superior talent for this type of work, it is correct/necessary for the work to be done by this tiny group
* ...become less concerned with the epistemic side of rationality -- "how do I know I'm right? how do I become more right than I already am?" -- and more concerned with gaining control and influence, so that the long-term future may be shaped by their own (already-obviously-correct) views
* ...spend more effort on self-experimenta ... (read more)

[-]cousin_it 1d 47: Maybe offtopic, but the "trying too hard to try" part rings very true to me. Been on both sides of it. The tricky thing about work, I'm realizing more and more, is that you should just work. That's the whole secret. If instead you start thinking about how difficult the work is, or how important it is to the world, or how you need some self-improvement before you can do the work effectively, these thoughts will slow you down, and surprisingly often they'll also be completely wrong. It always turns out later that your best work wasn't the work that took the most effort, or felt the most important at the time; you were just having a nose-down busy period, doing a bunch of things, and only the passage of time made clear which of them mattered.

2 [comment deleted] 18h

[-]Davis_Kingsley 15h 45: I worked for CFAR full-time from 2014 until mid-to-late 2016, and have worked for CFAR part-time or as a frequent contractor ever since. My sense is that dynamics like those you describe were mostly not present at CFAR, or insofar as they were present weren't really the main thing. I do think CFAR has not made as much research progress as I would like, but I think the reason for that is much more mundane and less esoteric than the pattern you describe here.
The fact of the matter is that for almost all the time I've been involved with CFAR, there just plain hasn't been a research team. Much of CFAR's focus has been on running workshops and other programs rather than on dedicated work towards extending the art; while there have occasionally been people allocated to research, in practice even these would often end up getting involved in workshop preparation and the like. To put things another way, I would say it's much less "the full-time researchers are off unproductively experimenting on their own brains in secret" and more "there are no full-time researchers". To the best of my knowledge CFAR has not ever had what I would consider a systematic research and development program -- ... (read more)

[-]hg00 13h 26: Does anyone have thoughts about avoiding failure modes of this sort? Especially in the "least convenient possible world" where some of the bullet points are actually true -- like, if we're disseminating principles for wannabe AI Manhattan Projects, and we're optimizing the principles for the possibility that one of the wannabe AI Manhattan Projects is the real deal, what principles should we disseminate?

Most of my ideas are around "staying grounded": spend significant time hanging out with "normies" who don't buy into your worldview, fully unplug from work at least one day per week, have hobbies outside of work (perhaps optimizing explicitly for escapism in the form of computer games, TV shows, etc.). Possibly live somewhere other than the Bay Area, someplace with fewer alternative lifestyles and a stronger sense of community. (I think Oxford has been compared favorably to Berkeley with regard to the presence of homeless people, at least.) But I'm just guessing, and I encourage others to share their thoughts. Especially people who've observed/experienced mental health crises firsthand -- how could they have been prevented? EDIT: I'm also curious how to think about scrupulosity. It ... (read more)

[-]romeostevensit 7h 36: IMO, a large number of mental health professionals simply aren't a good fit for high-intelligence people having philosophical crises. People know this and intuitively avoid the large hassle and expense of sorting through a large number of bad matches. Finding solid people to refer to who are not otherwise associated with the community in any way would be helpful.

[-]Rob Bensinger 4h 19: I know someone who may be able to help with finding good mental health professionals for those situations; anyone who's reading this is welcome to PM me for contact info.

[-]ozziegooen 3h 12: There's an "EA Mental Health Navigator" now to help people connect to the right care: https://eamentalhealth.wixsite.com/navigator I don't know how good it is yet. I just emailed them last week, and we set up an appointment for this upcoming Wednesday. I might report back later, as things progress.

[-]ChristianKl 8h 15: I do think that encouraging people to stay in contact with their family and to work on having good relationships with them is very useful. Family can provide a form of grounding that small talk with normies while going dancing or pursuing other hobbies doesn't provide. When deciding whether a personal development group is culty, I think a good test is to ask whether the work of the group leads to the average person in the group having better or worse relationships with their parents.
9 Avi 13h: I agree, and think it's important to 'stay grounded' in the 'normal world' if you're involved in any sort of intense organization or endeavor. You've made some great suggestions. I would also suggest that having a spouse who preferably isn't too involved, or involved at all, and maybe even some kids, is another commonality among people who find it easier to avoid going too far down these rabbit holes. Also, having a family is positive in countless other ways, and what I consider part of the 'good life' for most people.

[-]TekhneMakre 1d 18: "It would be both surprising news, and immensely bad news, to learn that only a tiny group of people could (or should) work on such a problem -- that would mean applying vastly less parallel "compute" to the problem, relative to what is theoretically available, and that when the problem is forbiddingly difficult to begin with." I have substantial probability on an even worse state: there are *multiple* people or groups of people, *each* of which is *separately* necessary for AGI to go well. Like, metaphorically, your liver, heart, and brain would each be justified in having a "rarity narrative". In other words, yes, the parallel compute is necessary -- there's lots of data and ideas and thinking that has to happen -- but there's a continuum of how fungible the compute is relative to the problems that need to be solved, and there's plenty of stuff at the "not very fungible but very important" end. Blood is fungible (though you definitely need it), but you can't just lose a heart valve, or your hippocampus, and be fine.

[-]nostalgebraist 1d 18: I didn't mention it in the comment, but having a larger pool of researchers is not only useful for doing "ordinary" work in parallel -- it also increases the rate at which your research community discovers and accumulates outlier-level, irreplaceable genius figures of the Euler/Gauss kind. If there are some such figures already in the community, great, but there are presumably others yet to be discovered. That their impact is currently potential, not actual, does not make its sacrifice any less damaging.

5 TekhneMakre 1d: Yep. (And I'm happy this overall discussion is happening, partly because, assuming rarity narratives are part of what leads to all this destructive psychic stuff as you described, then if a research community wants to work with people about whom rarity narratives would actually be somewhat *true*, the research community has an important subgoal: to figure out how to have true rarity narratives in a non-harmful way.)

[-]Gunnar_Zarncke 1d 15: Most of these bullet points seem to apply to some degree to every new and risky endeavor ever started. How risky things are is often unclear at the start. Such groups are built from committed people. Small groups develop their own dynamics. Fast growth leads to social growing pains. Lack of success leads to a lot of additional difficulties. Also: evaporative cooling. And if (partial) success happens, even more growth leads to needed management layers, etc. And later: hindsight bias.

7 Elizabeth 18h: Without commenting on the object level, I am really happy to see someone lay this out in terms of patterns that apply to a greater or lesser extent, with correlations but not in lockstep.

-15 TAG 1d

[-]Vanessa Kosoy 2d 99: Full disclosure: I am a MIRI Research Associate. This means that I receive funding from MIRI, but I am not a MIRI employee and I am not privy to its internal operations or secrets.
First of all, I am really sorry you had these horrible experiences. A few thoughts:

Thought 1: I am not convinced the analogy between Leverage and MIRI/CFAR holds up to scrutiny. I think that Geoff Anders is most likely a bad actor, whereas MIRI/CFAR leadership is probably acting in good faith. There seems to be significantly more evidence of bad faith in Zoe's account than in Jessica's account, and the conclusion is reinforced by adding evidence from other accounts. In addition, MIRI definitely produced some valuable public research, whereas I doubt the same can be said of Leverage, although I haven't been following Leverage so I am not confident about the latter (ofc it's in principle possible for a deeply unhealthy organization to produce some good outputs, and good outputs certainly don't excuse abuse of personnel, but I do think good outputs provide some evidence against such abuse). It is important not to commit the fallacy of gray: it would risk both judging MIRI/CFAR too harshly and judging Leverage in... (read more)

[-]ChristianKl 1d 11: "Thought 2: From my experience, AI alignment is a domain of research that intrinsically comes with mental health hazards. First, the possibility of impending doom and the heavy sense of responsibility are sources of stress. Second, research inquiries often enough lead to "weird" metaphysical questions that risk overturning the (justified or unjustified) assumptions we implicitly hold to maintain a sense of safety in life. I think it might be the closest thing in real life to the Lovecraftian notion of "things that are best not to know because they will drive you mad". Third, the sort of people drawn to the area and/or having the necessary talents seem to often also come with mental health issues (I am including myself in this group)." That sounds like MIRI should have a counselor on its staff.

[-]philip_b 1d 22: That would make them more vulnerable to claims that they use organizational mind control on their employees, and at the same time make it more likely that they would actually use it.

[-]ChristianKl 14h 14: You would likely hire someone who's traditionally trained, credentialed, and has work experience, instead of doing a bunch of your own psych experiments -- likely someone from a tradition like gestalt therapy that focuses on being nonmanipulative.

[-]benjamin.j.campbell 8h 10: There's an easier solution that doesn't run the risk of being or appearing manipulative. You can contract external and independent counselors and make them available to your staff anonymously. I don't know if there's anything comparable in the US, but in Australia they're referred to as Employee Assistance Programs (EAPs). Nothing you discuss with the counselor can be disclosed to your workplace, although in rare circumstances there may be mandatory reporting to the police (e.g. if abuse or ongoing risk to a minor is involved). This also goes a long way toward creating a place where employees can talk about things they're worried will seem crazy in work contexts.

7 ChristianKl 8h: Solutions like that might work, but it's worth noting that just having an average therapist likely won't be enough. If you actually care about a level of security that protects secrets against intelligence agencies, the operational security of the therapist's office is a concern. Governments that grant security clearances don't want their employees to talk about classified information with therapists who don't have those clearances.
Talking nonjudgmentally with someone who has reasonable fears that humanity won't survive the next ten years because of fast AI timelines is not easy.

7 jessicata 1d: As far as I can tell, normal corporate management is much worse than Leverage. The kind of people from that world will, sometimes when prompted in private conversations, say things like:
* Standard practice is to treat negotiations with other parties as zero-sum games.
* "If you look around the table and can't tell who the sucker is, it's you" is a description of a common, relevant social dynamic in corporate meetings.
* They have PTSD symptoms from working in corporate management, and are very threat-sensitive in general.
* They learned from experience to treat social reality in general as fake, everything as an act.
* They learned to accept that "there's no such thing as not being lost", like they've lost the ability to self-locate in a global map (I've experienced losing this to a significant extent).
* Successful organizations get to be where they are by committing crimes, so copying standard practices from them is copying practices for committing crimes.
This is, to a large extent, them admitting to being bad actors, them and others having been made so by their social context. (This puts the possibility of "Geoff Anders being a bad actor" into perspective.) MIRI is, despite the problems noted in the post, as far as I can tell the most high-integrity organization doing AI safety research. FHI contributes some, but overall lower-quality, research; Paul Christiano does some relevant research; OpenAI's original mission was actively harmful, and it hasn't done much relevant safety research as far as I can tell. MIRI's public output in the past few years since I left has been low, which seems like a bad sign for its future performance, but what it's done so far has been quite a large portion of the relevant research. I'm not particularly worried about scandals sinking the overall non-MIRI AI safety world's reputation, given the degree to which it is of mixed value.

[-]nostalgebraist 1d 103: "As far as I can tell, normal corporate management is much worse than Leverage" Your original post drew a comparison between MIRI and Leverage, the latter of which has just been singled out for intense criticism. If I take the quoted sentence literally, you're saying that "MIRI was like Leverage" is a gentler critique than "MIRI is like your regular job"? If the intended message was "my job was bad, although less bad than the jobs of many people reading this, and instead only about as bad as Leverage Research," why release this criticism on the heels of a post condemning Leverage as an abusive cult? If you believe the normally-employed among LessWrong readers are being abused by sub-Leverage hellcults, all the time, that seems like quite the buried lede! Sorry for the intense tone, it's just ... this sentence, if taken seriously, reframes the entire post for me in a big, weird, bad way.

[-]jessicata 1d 14: I thought I was pretty clear, at the end of the post, that I wasn't sad that I worked at MIRI instead of Google or academia. I'm glad I left when I did, though. The conversations I'm mentioning with corporate management types were surprising to me, as were the contents of Moral Mazes and Venkatesh Rao's writing. So "like a regular job" doesn't really communicate the magnitude of the harms to someone who doesn't know how bad normal corporate management is.
It's hard for me to have strong opinions given that I haven't worked in corporate management, though. Maybe a lot of places are pretty okay. I've talked a lot with someone who got pretty high in Google's management hierarchy, who seems really traumatized (and says she is) and who has a lot of physiological problems, which seem overall worse than mine. I wouldn't trade places with her, mental-health-wise. MIRI wouldn't make sense as a project if most regular jobs were fine; people who were really ok wouldn't have reason to build unfriendly AI. I discussed with some friends the benefits of working at Leverage vs. MIRI vs. the US Marines, and we agreed that Leverage and MIRI were probably overall less problematic, but that the fact that the US Marines signal that they're going to dominate/abuse people is an important advantage relative to the alternatives, since it sets expectations more realistically.

[-]Eli Tyre 20h 61: "MIRI wouldn't make sense as a project if most regular jobs were fine, people who were really ok wouldn't have reason to build unfriendly AI." I just want to note that this is a contentious claim. There is a competing story, one much more commonly held among people who work for or support MIRI: that the world is heading towards an unaligned intelligence explosion due to the combination of a coordination problem and very normal motivated reasoning about the danger posed by lucrative and prestigious projects. One could make the claim that "healthy" people (whatever that means) wouldn't exhibit those behaviors, i.e. that they would be able to coordinate and avoid rationalizing. But that's a non-standard view. I would prefer that you specifically flag it as a non-standard view, and then either make the argument for that view over the more common one, or highlight that you're not going into detail on the argument and that you don't expect others to accept the claim. As it is, it feels a little like this is being slipped in as if it were a commonly accepted premise.

[-]jessicata 20h 14: I agree this is a non-standard view.

0 Dr_Manhattan 12h: Yes, I would! Any pointers? (To avoid miscommunication, I'm reading this to say that people are more likely to build UFAI because of a traumatizing environment, vs. the normal reasons Eli mentioned.)

[-]Vaniver 1d 28: Note that there's an important distinction between "corporate management" and "corporate employment" -- the thing where you say "yeesh, I'm glad I'm not a manager at Google" is substantially different from the thing where you say "yeesh, I'm glad I'm not a programmer at Google", and the audience here has many more programmers than managers. [And also Vanessa's experience matches my impressions, tho I've spent less time in industry.] [EDIT: I also thought it was clear that you meant this more as "this is what MIRI was like" than "MIRI was unusually bad", but I also think this means you're open to nostalgebraist's objection, that you're ordering things pretty differently from how people might naively order them.]

[-]iceman 9h 21: My experience was that if you were T-5 (Senior), you had some overlap with PM and management games, and at T-6 (Staff), you were often in them. I could not handle the politics to get to T-7. Programmers below T-5 are expected to earn promotions or to leave. Google's a big company, so it might have been different elsewhere internally. My time at Google certainly traumatized me, but probably not to the point of anything in this or the Leverage thread.
[-]jefftk 1h 15: "Programmers below T-5 are expected to earn promotions or to leave." This changed something like five years ago, to where people at level four (one level above new grad) no longer needed to get promoted to stay long term.

9 T3t 21h: I think maybe a bit of the confusion here is nostalgebraist reading "corporate management" to mean something like "a regular job in industry", whereas you're pointing at "middle- or upper-management in sufficiently large or maze-like organizations"? Those seem very different to me, and I could imagine the second being much worse for people's mental health than the first. Separately, I'm confused about the claim that "people who were really ok wouldn't have reason to build unfriendly AI"; it sounds like you don't agree with the idea that UFAI is the default outcome of building AGI without a specific effort to make it friendly? (This is probably a distraction from this thread's subject, but I'd be interested to read your thoughts on that if you've written them up somewhere.)

[-]jessicata 21h 11: I think maybe a bit of the confusion here is nostalgebraist reading "corporate management" to mean something like "a regular job in industry", whereas you're pointing at "middle- or upper-management in sufficiently large or maze-like organizations"? -- Yes, that seems likely. I did some internships at Google as a software engineer and they didn't seem better than working at MIRI on average, although they had less intense psychological effects, as things didn't break out into fractal betrayal during the time I was there. Separately, I'm confused about the claim that "people who were really ok wouldn't have reason to build unfriendly AI" -- People might think they "have to be productive", which points at increasing automation detached from human value, which points towards UFAI. Alternatively, they might think there isn't a need to maximize productivity, and that they can do things that would benefit their own values, which wouldn't include UFAI. (I acknowledge there could be coordination problems where selfish behavior leads to cutting corners, but I don't think that's the main driver of existential-risk failure modes.)

[-]Vanessa Kosoy 1d 58: I worked for 16 years in the industry, including management positions, including (briefly) having my own startup. I talked to many, many people who worked in many companies, including people who had their own startups, some with successful exits. The industry is certainly not a rose garden. I encountered people who were selfish, unscrupulous, megalomaniacal, or just foolish. I've seen lies, manipulation, intrigue, and plain incompetence. But I also encountered people who were honest, idealistic, hardworking, and talented. I've seen teams trying their best to build something actually useful for some corner of the world. And it's pretty hard to avoid reality checks when you need to deliver a real product for real customers (although some companies do manage to just get more and more investment without delivering anything, until the eventual crash). I honestly think most of them are not nearly as bad as Leverage.

6 Dojan 1d: Plus a million points for "IMO it's a reason for less secrecy"! If you put a lid on something you might contain it in the short term, but only at the cost of increasing the pressure: and pressure wants out, and the higher the pressure, the more explosive it will be when it inevitably does come out.
I have heard too many accounts like this, in person and anecdotally, on the web and off, for me to currently be interested in working at, or even getting too closely involved with, any of the organizations in question. The only way to change this for me is to believably cultivate a healthy, transparent, and supportive environment. This made me go back and read "Every Cause wants to be a Cult" (Eliezer, 2007) [https://www.lesswrong.com/posts/yEjaj7PWacno5EvWa/every-cause-wants-to-be-a-cult], which includes quotes like this one: "Here I just want to point out that the worthiness of the Cause does not mean you can spend any less effort in resisting the cult attractor. And that if you can point to current battle lines, it does not mean you confess your Noble Cause unworthy. You might think that if the question were, "Cultish, yes or no?" that you were obliged to answer, "No," or else betray your beloved Cause."

[-]CronoDAS 1d 88: One takeaway I got from this when combined with some other stuff I've read: Don't do psychedelics. Seriously, they can fuck up your head pretty bad, and people who take them, and organizations that encourage taking them, often end up drifting further and further away from normality and reasonableness until they end up in Cloudcuckooland.

[-]Eliezer Yudkowsky 1d 129: I'm about ready to propose a group norm against having any subgroups or leaders who tell other people they should take psychedelics. Maybe they have individually motivated uses -- though I get the impression that this is, at best, a high-variance bet with significantly negative expectation. But the track record of "rationalist-adjacent" subgroups that push the practice internally, and of would-be leaders who suggest to other people that they do them, seems just way too bad. I'm also about ready to propose a similar no-such-group policy on 'woo', tarot-reading, supernaturalism only oh no it's not really supernaturalism I'm just doing tarot readings as a way to help myself think, etc. I still think it's not our community's business to try to socially prohibit things like that on an individual level by exiling individuals like that from parties; I don't think we have or should have that kind of power over individual behaviors that neither pick pockets nor break legs. But I think that when there's anything like a subgroup or a leader with those properties, we need to be ready to say, "Yeah, that's not a group in good standing with the rest of us, don't go there." Th... (read more)

[-]Rob Bensinger 21h 103: Copying over a related Oct. 13-17 conversation from Facebook (context: someone posted a dating ad in a rationalist space where they said they like tarot etc., and rationalists objected):

Marie La: As a cultural side note, most of my woo knowledge (like how to read tarot) has come from the rationalist community, and I wouldn't have learned it otherwise

Eliezer Yudkowsky: @Marie La Any ideas how we can stop that? (+1 from Rob B)

Marie La: Idk, it's an introspective technique that works for some people. Doesn't particularly work for me.
Sounds like the concern is bad optics/PR rather than efficacy (+1 from Rob B)

Shaked Koplewitz: @Marie La Optics implies that the concern is with the impression it makes on outsiders; my concern here is the effect on insiders (arguably this is optics too, but a non-central example)

Rob Bensinger: If the concern is optics, either to insiders or outsiders, then it seems vastly weaker to me than i... (read more)

[-]Vaniver 19h 15: Somehow this reminds me of the time I did a Tarot reading for someone whose only previous experience had been Brent Dill doing a Tarot reading, and they were... sort of shocked at the difference. (I prefer three-card layouts with a simple context where both people think carefully about what each of the cards could mean; I've never seen his, but the impression I got was way more showmanship.)

[-]Gunnar_Zarncke 11h 11: If it works as a device to facilitate sub-conscious associations, then maybe an alternative should be designed that sheds the mystical baggage and comes with clear explanations of why and how it works.

3 jefftk 1h: I'm generally very anti-woo, but I expect presenting it clearly and without baggage would make it stop working, because the participant would be in a different mental state.

2 Gunnar_Zarncke 30m: Well, if that is true, then that would be another avenue to research mental states -- something that is clearly needed. But what I really wanted to say: you shouldn't do it if you can't formulate hypotheses and run experiments on it.

[-]Viliam 1d 40: Thank you for saying this! I wonder where the line will be drawn with regard to the { meditation, Buddhism, post-rationality, David Chapman, etc. } cluster. On one hand, meditation -- when done without all the baggage, hypothetically -- seems like a useful tool. On the other hand, it simply invites all that baggage, because that is in the books, in the practicing communities, etc. Also, Christianity is an outgroup, but Buddhism is a fargroup, so people seem less averse to its religious connotations; in my opinion, it's just a different flavor of the same poison. Buddhism is sometimes advertised as a kind of evidence-based philosophy, but then you read the books and they discuss the supernatural and describe the miracles done by Buddha. Plus the insights into your previous lives, into the ultimate nature of reality (my 200 Hz brain sees the quantum physics, yeah), etc. Also, somewhat ironically... "Marcello and I developed a convention in our AI work: when we ran into something we didn't understand, which was often, we would say "magic" -- as in, "X magically does Y" -- to remind ourselves that here was an unsolved problem, a gap in our understanding. It is far better to say "magic" than "compl ... (read more)

[-]Rob Bensinger 4h 41: Regarding meditation, Kevin Fischer reported a surprising-to-me anecdote on FB yesterday: "I had one conversation with Soryu [the head of Monastic Academy / MAPLE] at a small party once. I mentioned that my feeling about meditation is that it's really good for everyone when done for 15 minutes a day, and when done for much more than that forever, it's much more complicated and sometimes harmful. He straightforwardly agreed, and said he provides the environment for long term dedication to meditation because there is a market demand for that product."

[-]wunan 1d 27: "On one hand, meditation -- when done without all the baggage, hypothetically -- seems like a useful tool.
On the other hand, it simply invites all that baggage, because that is in the books, in the practicing communities, etc." I think meditation should be treated similarly to psychedelics -- even for meditators who don't think of it in terms of anything supernatural, it can still have very large and unpredictable effects on the mind. The more extreme the style of meditation (e.g. silent retreats), the more likely this sort of thing is. Any subgroups heavily using meditation seem likely to have the same problems as the ones Eliezer identified for psychedelics/woo/supernaturalism.

3 Gunnar_Zarncke 10h: I have pointed out the risks of meditation and meditation-like practices before. The last time was on Shoulder Advisors [https://www.lesswrong.com/posts/X79Rc5cA5mSWBexnd/shoulder-advisors-101?commentId=BTS2s6JiiKqEkYju2], which does seem to fall on the boundary. I have experience with meditation and have been to extended silent meditation retreats, with only positive results. Nonetheless, bad trips are possible -- esp. without a supportive teacher and/or community. But I wouldn't make a norm against groups fostering meditation. Meditation depends on groups for support (though the same might be said about psychedelics). Meditation is also a known way to gain high levels of introspective awareness and has many mental health benefits (many posts about that on LW; I'm too lazy to find them). The group norm about these things should be to require oversight by a Living Tradition of Knowledge [https://www.lesswrong.com/posts/nnNdz7XQrd5bWTgoP/on-the-loss-and-preservation-of-knowledge] in the relevant area (for meditation, e.g., an established -- maybe even Buddhist -- meditation school).

3 Kenny 16h: Psychedelics, woo, and meditation are very separate things. They are often used in conjunction with each other due to popularity and the contexts in which they are discussed together. Buddhism has incorporated meditation into its woo, while other religions have mostly focused on group-based services when it comes to their woo. I like how some commenters have grouped psychedelics and meditation separately from the woo stuff, but it was a bit surprising to me to see Eliezer dismissing psychedelics along with woo in the same statements. He probably hasn't taken psychedelics before. Meditation is quite different in that it's more of a state of mind, as opposed to an altered mentality. With psychedelics there is a clear distinction between when you are tripping and when you aren't; with meditation, it's not so clear when you are meditating and when you aren't. Woo is just putting certain ideas into words, which has nothing to do with different mindsets/mentalities.

2 Laszlo_Treszkai 8h: However, according to some [https://slatestarcodex.com/2017/09/18/book-review-mastering-the-core-teachings-of-the-buddha/], even meditation done properly can have negative effects, which would be similar to psychedelics but manifesting slower and through your own effort. Quoted from the book review:

1 Kenny 7h: I don't think I was advocating for either. I apologize if I came off as saying people should try psychedelics and meditation.

[-]Bjartur Tomas 8h 19: Even in the case of Sam Harris, who seems relatively normal, he lost a decade of his life pursuing "enlightenment" through meditation -- also notable is that this was spurred on by psychedelic use. Though I am sure he would not agree with the frame that it was a waste, I read his *Waking Up* as a bit of a horror story.
For someone without his high IQ and indulgent parents, you could imagine more horrible ends. I know of at least one person who was bright, had wild ambitious ideas, and now spends his time isolated from his family, inwardly pursuing "enlightenment" -- and this through the standard meditation + psychedelics combination. I find it hard to read this as anything other than wire-heading, and I think a good social norm would be one where we consider such behavior about as virtuous as obsessive masturbation. In general, for any drug that produces euphoria, especially spiritual euphoria, the user develops an almost romantic relationship with their drug, as the feelings it inspires are just as intense (and sometimes more so) as familial love. One should at least be slightly suspicious of the benefits propounded by their users, who in many cases literally worship their drugs of choice.

[-]Aella 4h 54: Fwiw, as a data point here: I spent some time inwardly pursuing "enlightenment" with heavy and frequent doses of psychedelics for a period of 10 months, and consider this to be one of the best things I've ever done. I believe it raised my resting set-point happiness, among other good things, and I am still deeply altered (7 years later). I do not think this is a good idea for everyone, and lots of people who try would end up worse off. But I strongly object to this being seen as as virtuous as obsessive masturbation. Sure, it might not be your thing, but this frame seriously misses a huge amount of really important changes in my experience. And I get you might think I'm... brainwashed or something? by drugs? So I don't know what I could say that would convince you otherwise. But I did have concrete things, like solving a pretty big section of childhood trauma (like: I had a burning feeling of rage in my chest before, and the burning feeling was gone afterwards). I had multiple other people comment on how different I was now (usually in regards to laughing easier and seeming more relaxed), I lost my anxiety around dying, my relationship to pain altered in such a way that I am significantly ... (read more)

[-]Bjartur Tomas 3h 20: "And I get you might think I'm... brainwashed or something? by drugs?" I'm not sure what you find implausible about that. Drugs do not literally propagandize the user, but they can hijack the reward system, in the case of many drugs, and in the case of psychedelics they seem to alter beliefs in reliable ways. Psychedelics are also taken in a memetic context with many crystallized notions about what the psychedelic experience is, what enlightenment is, and that enlightenment itself is a mysterious but worthy pursuit. The classic joke about psychedelics is that they provide the feelings associated with profound insights without the actual profound insights. To the extent this is true, I feel this is pretty dangerous territory for a rationalist to tread. In your own case, unless I am misremembering, I believe on your blog you discuss LSD permanently ~~lowering your mathematical abilities~~ degrading your memory. This seems really, really bad to me... "Maybe this one is less concrete, but some part of me feels really deeply at peace, always, like it knows everything is going to be ok and I didn't have that before." I'm glad your anxiety is gone, but I don't think everything is going to be alright by default. I would not like to modify myself to think that. It seems clearly untrue. Perhaps the masturbation line was going too far.
But the gloss of virtue that "seeking enlightenment" has strikes me as undeserved.

[-]Aella 3h 18: Also fwiw, I took psychedelics in a relatively memetic-free environment. I'd been homeschooled and not exposed to hippie/drug culture, and especially not to significant discussion around enlightenment. I consider this to be one of the reasons my experience was so successful; I didn't have it in relationship to those memes, and did not view myself as pursuing enlightenment. (I know I said I was inwardly pursuing enlightenment in my above comment, but I was mostly riffing off your phrasing; in some sense I think it was true, but it wasn't a conscious thing.) LSD did not permanently lower my mathematical abilities, and if I suggested that, I probably misspoke. I suspect it damaged my memory, though; my memory is worse now than before I took LSD. And sorry: by 'everything being ok' I didn't mean that I literally think situations will end up being the ones I want; I mean that I know I will be okay with whatever happens. Very related to my endurance of pain going up by quite a lot, and my anxiety about death disappearing. Separately, I do think that a lot of the memes around psychedelics are... incomplete? It's hard to find a good word. Naive? Something around the difference between the aesthetic of a thing and the thing itself? And in that I might agree with you somewhere that "seeking enlightenment" isn't... virtuous or whatever.

4 Bjartur Tomas 3h: Thanks. Corrected; I probably conflated the two. But my feelings towards that change are the same, so the line otherwise remains unchanged. I should probably organize my opinions/feelings on this topic and write an effortpost or something, rather than hash it out in the comments.

[-]Duncan_Sabien 4h 20: In my culture, it's easy to look at "what happens at the ends of the bell curve" and "where's the middle of the bell curve" and "how tight vs. spread out is the bell curve (i.e. how different are the ends from the middle)" and "are there multiple peaks in the bell curve", and all of that, separately. Like, +1 for the above, and I join the above in giving a reminder that rounding things off to "thing bad" or "thing good" is not just not required, it's actively unhelpful. Policies often have to have a clear answer, such as the "blanket ban" policy that Eliezer is considering proposing. But the black-or-white threshold of a policy should not be confused with the complicated thing underneath being evaluated.

[-]Holly_Elmore 2h 15: Western Buddhism tends to be more of a bag of wellness tricks than a religion, but it's worth sharing that Buddhism proper is anti-life. It came out of a Hindu obsession with ending the cycle of reincarnation. Nirvana means "cessation." The whole idea of meditation is to become tolerant of signals to action so you can let them pass without doing the things that replicate them or, ultimately, propagate any life-like process. Karma is described as a giant wheel that powers reincarnation and gains momentum whenever you act unconsciously. The goal is for the wheel to stop moving, and the way is to unlearn your habit of kicking it. When the Buddha became enlightened under the Bodhi tree, it wasn't actually complete enlightenment. He was "enlightened with residues" -- he stopped making new karma, but he was still burning off old karma. He achieved actual cessation when he died. To be straight-up enlightened, you stop living. The whole project of enlightenment is to end life. It's a sinister and empty philosophy, IMO.
A lot of the insights and tools are great, but the thrust of (at least Theravada) Buddhism is my enemy.

5 Rob Bensinger 33m: I agree this is pretty sinister and empty. Traditional samsara includes some pretty danged nice places (the heavens), not just things that have Earth-like quantities or qualities of flourishing; so rejecting all of that sounds very anti-life. Some complicating factors:
* It's not clear (to put it lightly) what parinirvana (post-death nirvana / escape from samsara) entails. Some early Buddhists seem to have thought of it as more like oblivion/cessation; others seem to have thought of it as more like perfectly blissful experience. (Obviously, this becomes more anti-life when you get rid of supernaturalism -- then the only alternative to 'samsara' is oblivion. But the modern Buddhist can retreat to various mottes about what 'nirvana' is, such as embracing living nirvana (sopadhishesa-nirvana) while rejecting parinirvana.)
* The Buddhists have a weird psychological theory according to which living in samsara inherently sucks. Liking or enjoying things is really just another species of bad. The latter view is still pretty anti-life, but notably, it's a psychological claim ('this is what it's really like to experience things'), not a normative claim that we should reject life a priori. If a Buddhist updates away from thinking everything is dukkha, they aren't necessarily required to reject life anymore -- the life-rejection was contingent on the psych theory.

4 Kaj_Sotala 11m: There are also versions of the psychological theory in which dukkha is not associated with all motivation, just the craving-based system [https://www.lesswrong.com/posts/gvXFBaThWrMsSjicD/craving-suffering-and-predictive-processing-three], which is in a sense "extra"; it's a layer on top of [https://www.lesswrong.com/posts/r6kzvdia4S8TKE6WF/from-self-to-craving-three-characteristics-series#Craving_as_a_second_layer_of_motivation] the primary motivation system, which would continue to operate even if all craving was eliminated. Under that model (which I think is the closest to being true), you could (in principle) just eliminate the unpleasant parts of human motivation, while keeping the ones that don't create suffering -- and probably get humans who were far more alive as a result, since they would be far more willing to do even painful things if pain no longer caused them suffering. Pain would still be a disincentive, in the same way that a reinforcement learner would generally choose to take actions that brought about positive rather than negative reward, but it would make it easier for people to voluntarily choose to experience a certain amount of pain in exchange for better achieving their values afterwards, for instance.

4 Rafael Harth 17m: I don't think this is true, at least not insofar as it describes the original philosophy. You may be thinking of the first noble truth, "the truth of dukkha", but dukkha is not correctly translated as suffering; a better translation is "unsatisfactoriness". For example, even positive sensations are dukkha, according to the Buddha. I think the intention of the first noble truth is to say that worldly sensations, positive and negative, are inherently unsatisfactory. The Buddha has also said pretty explicitly that a great happiness can be achieved through the noble path, which seems to directly contradict the idea that life inherently sucks, and to imply that suffering can be overcome.
(However, there may be things he's said that support the quote; I'm definitely not claiming to have a full or even representative view.)

4 Kaj_Sotala 26m: I'm willing to grant that there are certain interpretations of Buddhism that take this view, but I object pretty strongly to depicting it as the idea of meditation. Especially since there are many different varieties of meditation, with varying degrees of (in)compatibility with this goal; something like loving-kindness or shi-ne meditation [https://vajrayananow.com/shi-ne-meditation] both seem more appropriate for creating activity, for instance. In my view, there are so many varieties and interpretations of Buddhism that pointing to some of them having an anti-life view always seems like a weird sleight of hand to me. By saying that Buddhism originates as an anti-life practice, one can then imply that all of its practices also tend to lead towards that goal, without needing to establish that that's actually the case. After all, just because some of the people who developed such techniques wanted to create an anti-life practice doesn't mean that they actually succeeded in developing techniques that would be particularly well-suited for this goal. I agree that it's possible to use them for such a goal, especially if they're taught in the context of an ideology that frames the practice that way, but I don't think them to be very effective for that goal even then.

2 Unreal 1h: This view is disputed and countered in the original texts. It is worth it to me to mention this, but I am not the right one to go into details.

5 steven0461 2h: I wonder what the rationalist community would be like if, instead of having been forced to shape itself around risks of future superintelligent AI in the Bay Area, it had been artificial computing superhardware in Taiwan, or artificial superfracking in North Dakota, or artificial shipping supercontainers in Singapore, or something. (Hypothetically, let's say the risks and opportunities of these technologies were equally great and equally technically and philosophically complex as those of AI in our universe.)

[-]Vaniver 1d 25: "I'm also about ready to propose a similar no-such-group policy on 'woo', tarot-reading, supernaturalism only oh no it's not really supernaturalism I'm just doing tarot readings as a way to help myself think, etc." Hmm. I can't tell if the second half is supposed to be pointing at my position on Tarot, or at the thing that's pretending to be my position but is actually confused. Like, I think the hit rate for 'woo' is pretty low, and so I spend less time dredging there than I do other places that are more promising, but also I am not ashamed of the things I've noticed that do seem like hits there. Like, I haven't delivered on my IOU to explain 'authenticity' yet, but I think Circling is actually a step above practices that look superficially similar, in a way we could understand rigorously, even if Circling is in a reference class that is quite high in woo, and many Circlers like the flavor of woo. That said, I could also see an argument like: "look, we really have to implement rules like this at a very simple level or they will get bent to hell, and it's higher EV to not let in woo."

[-]Holly_Elmore 1d 14: Would it be acceptable to regard practices like self-reflective tarot and Circling and other woo-adjacent stuff as art, rather than as an attempt at rationality?
I think it is a danger sign when people claim those highly introspective and personal activities as part of their aspiring to rationality. Can we just do art and personal emotional and creative discovery, and not claim that it's directly part of the rationalist project?

[-]Vaniver 1d 15: I mean, I also do things that I would consider 'art' that I think are distinct from rationality. But, like, just as I wouldn't really consider 'meditation' an art project instead of 'inner work' or 'learning how to think' or w/e, I wouldn't really consider Circling an art project instead of those things.

[-]Holly_Elmore 16h 12: I would consider meditation and Circling to have the same relationship to "discovering the truth" as art. The insights can be real and profound, but are less rigorous and much more personal.

[-]Duncan_Sabien 1d 14: There are some potential details that might swing one way or the other (Vaniver's comment points at some), but as written above, and to the best of my ability to predict what such a proposal would actually look like once Eliezer had put effort into it: I expect I would wholeheartedly and publicly endorse it, and be a signatory/adopter.

[-]Unreal 3h 12: I feel tempted to mostly agree with Eliezer here... Umm. To relay a trad Buddhist perspective: you're not (traditionally) supposed to make a full-blown attempt at 'enlightenment' or 'insight' until you've spent a fairly extensive time working on personal ethics & discipline. I think an unnamed additional step is to establish your basic needs, like good community, health, food, shelter, etc. It's also recommended that you avoid drugs, alcohol, and even sex. There's also an important sense I get from trad Buddhism, which is: if you hold a nihilistic view, things will go sideways. A subtle example of nihilism is the sense that "It doesn't matter what I do or think because it's relatively inconsequential in the scheme of things, so whatever," or a deeper hidden sense of "It doesn't really matter if everyone dies," or "I feel it might be better if I just stopped existing?" or "I can think whatever I want inside my own head, including extensive montages of murder and rape, because it doesn't really affect anything." These views seem not uncommon among modern people, and subtler forms seem very common. Afaict from reading biographies, modern people have more trouble wit... (read more)

[-]ChristianKl 14h 12: Instead of declaring group norms, I think it would be worth it to have posts that actually lay out the case in a convincing manner. In general there are plenty of contrarian rationalists for whom "it's a group norm" is not enough reason to not do something. Declaring a norm against drugs might just get them to be more secretive about it, which is bad. Trying to solve issues about people doing the wrong things with group norms instead of with deep arguments doesn't seem to be the rationalist way.

6 Gunnar_Zarncke 10h: Can you propose a norm that avoids the pitfalls?

6 ChristianKl 8h: Have the important conversations about why you shouldn't take drugs / engage in woo openly on LessWrong, instead of having them only privately, where they don't reach many people. Then confront people who suggest something in that direction with those posts.

1 TekhneMakre 12h: +1

2 Chris_Leong 20h: I guess I'd suggest thinking about targets carefully. A lot of people are going to experiment with psychedelics anyway, and it's safer for people to do so within a group, assuming the group is actually trustworthy and not attempting to brainwash people.
[-]Kaj_Sotala 1d 46: OTOH, a significant number of (seemingly sane) people credit psychedelics with important personal insights and mental health/trauma healing. Psychedelics seem to be showing enough promise there that the psychiatric establishment is getting interested in them again [1, 2], despite their having been stigmatized for decades, and AFAIK the existing medical literature generally finds them to be low-risk [3, 4].

5 Avi 14h: Psilocybin-based psychedelics are indeed considered low-risk both in terms of addiction and overdose. This chart sums things up nicely, and is a good thing to 'pin on your mental fridge': https://upload.wikimedia.org/wikipedia/commons/thumb/a/a5/Drug_danger_and_dependence.svg/1920px-Drug_danger_and_dependence.svg.png You want to stay as close as possible to the bottom left corner of that graph!

[-]toonalfrink 8h 11: This graph shows death and addiction potential, but it doesn't say anything about sanity.

1 Avi 8h: Correct -- but they are low-risk for those factors (addiction and/or overdose).

-4 toonalfrink 8h: Psychedelics don't have any inherent positive or negative effect; they just make you more open to suggestion. They increase your learning rate. New evidence (i.e. your current lifestyle) will start weighing more on you than your prior (i.e. everything you've learned since you were a child). If you are in a context that promotes healthy ideas, then psychedelics will help you absorb them faster. If you are in a cult, they'll make you go off the rails faster. I take them all the time and I'm better for it, but I would never take them in Berkeley.

[-]Holly_Elmore 1h 16: This is not true. Some people are significantly less robust to the effects of psychedelics. Even a meditation retreat was enough to make me go off the rails -- I would never take psychedelics. But some people can't feel anything at those retreats, and it seems like psychedelics just open them up a bit. The same predispositions that lead people to develop schizophrenia and bipolar make them vulnerable to destabilization from psychedelics.

[-]Matt Goldenberg 4h 15: This may be slightly overconfident. My guess is that the effects can vary wildly depending on the individual.

[-]Duncan_Sabien 4h 32: It's definitely overconfident. Source: twenty years of listening to a wide range of stories from my mother's experiences as a mental health nurse in a psychiatric emergency room. Some of those psychedelic-related cases involved all sorts of confounding factors, and some of them just didn't.

[-]iceman 1d 25: I want to second this. I worked for an organization where one of the key support people took psychedelics and just... broke from reality. This was both a personal crisis for him and an organizational crisis for the company, which had to deal with the sudden departure of a bus-factor-1 employee. I suspect that psychedelic damage happens more often than we think, because there's a whole lobby which buys the expand-your-mind narrative.

[-]jessicata 1d 18: I don't regret having used psychedelics, though I understand why people might take what I've written as a reason not to try them.

[-]CronoDAS 1d 11: The most horrific case I know of LSD being involved in a group's downward spiral -- from weird and kinda messed up to completely disconnected from reality and really fucking scary -- is the Manson Family, but that's far from a typical example.
But if you do want to be a cult leader, LSD does seem to do something that makes the job a lot easier.

[-]Eli Tyre 15h 82: "I don't want to concentrate on the question of which is "worse"; it is hard to even start thinking about that without discussing facts on the ground and general social models that would apply to both cases." I feel some annoyance at this sentence. I appreciate the stated goal of just trying to understand what happened in the different situations, without blaming or trying to evaluate which is worse. But then the post repeatedly (in every section!) makes reference to Zoe's post, comparing her experience at Leverage to your (and others') experience at MIRI/CFAR, taking specific elements from her account and drawing parallels to your own. This is the main structure of the post! Some more or less randomly chosen examples (ctrl-f "Leverage" or "Zoe" for lots more): "Zoe begins by listing a number of trauma symptoms she experienced. I have, personally, experienced most of those on the list of cult after-effects in 2017, even before I had a psychotic break." ... "Zoe further talks about how the experience was incredibly confusing and people usually only talk about the past events secretively. This matches my experience." ... "Zoe discusses an unofficial NDA people signed as they left, agree ... (read more)

[-]Eli Tyre 15h 77: This feels especially salient because a number of the specific criticisms, in my opinion, don't hold up to scrutiny, but this is obscured by the comparison to Leverage. For any cultural characteristic X, there will be healthy and unhealthy versions. For instance, there are clearly good, healthy versions of "having a culture of self-improvement and debugging", and also versions that are harmful. For each point, Zoe contends that (at least some parts of) Leverage had a destructive version, and you point out that there was a similar thing at MIRI/CFAR. And for many (but not all) of those points, I agree that there was a similar dynamic at MIRI/CFAR, and also I think that the MIRI/CFAR version was much less harmful than what Zoe describes. For instance, Zoe makes the claim that (at least some parts of) Leverage had an unhealthy and destructive culture of debugging. You, Jessica, make the claim that CFAR had a similar culture of debugging, and that this was similarly bad. My current informed impression is that CFAR's self-improvement culture both had some toxic elements and is/was also an order of magnitude better than what Zoe describes. Assuming for a moment that my assess... (read more)

5 Vladimir_Nesov 11h: This works as a general warning against awareness of hypotheses that are close to, but distinct from, the prevailing belief. The goal should be to make this feasible, not to become proficient in noticing the warning signs and keeping away from it. I think the feeling that this kind of argument is fair is a kind of motivated cognition that's motivated by credence. That is, if a cognitive move (argument, narrative, hypothesis) puts forward something false, there is a temptation to decry it for reasons that would prove too much, that would apply to good cognitive moves just as well if considered in their context, which credence-motivated cognition won't be doing.

[-]AnnaSalamon 2d 77: FWIW, the above matches my own experiences/observations/hearsay at and near MIRI and CFAR, and seems to me personally like a sensible and correct way to put it together into a parsable narrative. The OP speaks for me.
(I of course still want other conflicting details and narratives that folks may have; my personal 'oh wow this puts a lot of pieces together in a parsable form that yields basically correct predictions' level is high here, but insofar as I'm encouraging anything because I'm in a position where my words are loud invitations, I want to encourage folks to share all the details/stories/reactions pointing in all the directions.) I also have a few factual nitpicks that I may get around to commenting, but they don't subtract from my overall agreement. I appreciate the extent to which you (Jessicata) manage to make the whole thing parsable and sensible to me and some of my imagined readers. I tried a couple times to write up some bits of experience/thoughts, but had trouble managing to say many different things A without seeming to also negate other true things A', A'', etc., maybe partly because I'm triggered about a lot of this / haven't figured out how to mesh different parts of what I'm seeing with some overall common sense, and also because I kept anticipating the same in many readers.

AnnaSalamon: To be clear, a lot of what I find so relaxing about Jessica's post is that my experience reading it is of seeing someone who is successfully noticing a bunch of details in a way that, relative to what I'm trying to track, leaves room for lots of different things to get sorted out separately. I just got an email that led me to sort of triggeredly worry that folks will take my publicly agreeing with the OP to mean that I e.g. think MIRI is bad in general. I don't think that; I really like MIRI and have huge respect and appreciation for a lot of the people there; I also like many things about the CFAR experiment and love basically all of the people who worked there; I think there's a lot to value across this whole space. I like the detailed specific points that are made in the OP (with some specific disagreements, though also with corroborating detail I can add in various places); I think this whole "how do we make sense of what happens when people get together into groups? and what happened exactly in the different groups?" question is an unusually good time to lean on detail-tracking and reading comprehension.

LoganStrohl: [I deleted a comment in this thread because I realized it belonged in a different thread. Just being clumsy, sry.]

philip_b: To my understanding, since the time when the events described in the OP took place, MIRI and CFAR have been very close and getting closer and closer. As far as I see it, nowadays CFAR is about 60% a hiring ground for MIRI and only 40% something else, though I could be wrong. Since you're one of the leaders of CFAR, that makes you one of the leading people behind all those things the OP is critical of. The OP even writes that she thought and thinks CFAR was corrupt in 2017: "Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz. (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don't think this judgment was in error, though many of the following actions were; ...)" Here she mentions Ziz also thinking that CFAR was corrupt, and I remember that in her blog, Ziz placed you at the center of said corruption. So how is all this compatible with you agreeing with the OP?
[comment deleted]

AnnaSalamon: Here is a thread for detail disagreements, including nitpicks and including larger things, that aren't necessarily meant to connect up with any particular claim about what overall narratives are accurate. (Or maybe the whole comment section is that, because this is LessWrong? Not sure.) I'm starting this because local validity semantics are important, and because it's easier to get details right if I (and probably others) can consider those details without having to pre-compute whether those details will support correct or incorrect larger claims. For me personally, part of the issue is that though I disagree with a couple of the OP's details, I also have some other details that support the larger narrative which are not included in the OP, probably because I have many experiences in the MIRI/CFAR/adjacent communities space that Jessicata doesn't know about and couldn't include. And I keep expecting that if I post details without these kinds of conceptualizing statements, people will use this to make false inferences about my guesses about the higher-order bits of what happened.

habryka: The post explicitly calls for thinking about how this situation is similar to what is happening/happened at Leverage, and I think that's a good thing to do. I do think that I have specific evidence that makes me think that what happened at Leverage seemed pretty different from my experiences with CFAR/MIRI. Like, I've talked to a lot of people about stuff that happened at Leverage in the last few days, and I do think that overall, the level of secrecy and paranoia about information leaks at Leverage seemed drastically higher than anywhere else in the community that I've seen, and I feel like the post is trying to draw some parallel here that fails to land for me (though it's also plausible it is pointing out a higher level of information control than I thought was present at MIRI/CFAR). I have also had my disagreements with MIRI being more secretive, and think it comes with a high cost that I think has been underestimated by at least some of the leadership, but I haven't heard of people being "quarantined from their friends" because they attracted some "set of demons/bad objects that might infect others when they come into contact with them", which feels to me like a different lev... (read more)

ChristianKl: When it comes to agreements preventing disclosure of information, often there's no agreement to keep the existence of the agreement itself secret. If you don't think you can ethically (and given other risks) share the content that's protected by certain agreements, it would be worthwhile to share more about the agreements and with whom you have them. This might also be accompanied by a request to those parties to agree to lift the agreement. It's worthwhile to know who thinks they need to be protected by secrecy agreements.

LoganStrohl: I, um, don't have anything coherent to say yet. Just a heads up. I also don't really know where this comment should go. But also I don't really expect to end up with anything coherent to say, and it is quite often the case that when I have something to say, people find it worthwhile to hear my incoherence anyway, because it contains things that underlay their own confused thoughts, and after hearing it they are able to un-confuse some of those thoughts and start making sense themselves. Or something. And I do have something incoherent to say. So here we go.
I think there's something wrong with the OP. I don't know what it is, yet. I'm hoping someone else might be able to work it out, or to see whatever it is that's causing me to say "something wrong" and then correctly identify it as whatever it actually is (possibly not "wrong" at all). On the one hand, I feel familiarity in parts of your comment, Anna, about "matches my own experiences/observations/hearsay at and near MIRI and CFAR". Yet when you say "sensible", I feel, "no, the opposite of that". Even though I can pick out several specific places where Jessicata talked about concrete events (e.g. "I believed that I was intrinsically... (read more)

Vladimir_Nesov: This matches my impression in a certain sense. Specifically, the density of gears in the post (elements that would reliably hold arguments together, confer local validity, or pin them to reality) is low. It's a work of philosophy, not investigative journalism. So there is a lot of slack in shifting the narrative in any direction, which is dangerous for forming beliefs (as opposed to setting up new hypotheses), especially if done in a voice that is not your own. The narrative of the post is coherent and compelling; it's a good jumping-off point for developing it into beliefs and contingency plans, but the post itself can't be directly coerced into those things, and this epistemic status is not clearly associated with it.

jessicata: How do you think Zoe's post, or mainstream journalism about the rationalist community (e.g. Cade Metz's article [https://www.nytimes.com/2021/02/13/technology/slate-star-codex-rationalists.html], perhaps there are other better ones I don't know about), compares on this metric? Are there any examples of particularly good writeups about the community and its history you know about?

Vladimir_Nesov: I'm not saying that the post isn't good (I did say it's coherent and compelling), and I'm not at this moment aware of something better on its topic (though my ability to remain aware of such things is low, so that doesn't mean much). I'm saying specifically that gear density is low, so it's less suitable for belief formation than hypothesis setup. This is relevant as a more technical formulation of what I'm guessing LoganStrohl is gesturing at. I think investigative journalism is often terrible, as is philosophy, but the concepts are meaningful in characterizing types of content with respect to gear density, including high-quality content.

jessicata: I am intending this more as a contribution of relevant information and initial models than as firm conclusions; conclusions are easier to reach the more different relevant information and models are shared by different people, so I suppose I don't have a strong disagreement here.

Vladimir_Nesov: Sure, and this is clear to me as a practitioner of the yoga of taking in everything only as a hypothesis/narrative, mining it for gears, and separately checking what beliefs happen to crystallize out of this, if any. But for someone who doesn't always make this distinction, not having a clear indication of the status of the source material needlessly increases epistemic hygiene risks, so it's a good norm to make the epistemic status of content more legible. My guess is that LoganStrohl's impression is partly of a violation of this norm (which I'm not even sure clearly happened), shared by a surprising number of upvoters.
jessicata: Do you predict Logan's comment would have been much different if I had written "[epistemic status: contents of memory banks, arranged in a parseable semicoherent narrative sequence, which contains initial models that seem to compress the experiences in a Solomonoff sense better than alternative explanations, but which aren't intended to be final conclusions, given that only a small subset of the data has been revealed and better models are likely to be discovered in the future]"? I think this is to some degree implied by the title, which starts with "My experience...", so I don't think this would have made a large difference, although I can't be sure about Logan's counterfactual comment.

Vladimir_Nesov: I'm not sure, but the hypothesis I'm chasing in this thread, intended as a plausible steelman of Logan's comment, thinks so. One alternative that is also plausible to me is motivated cognition that would decry undesirable source material for low gear density; that one predicts little change in response to more legible epistemic status.

jessicata: I expect the alternative hypothesis to be true, given the difference between the responses to this post and Zoe's post.

Benquo: This reads like you feel compelled to avoid parsing the content of the OP, and instead intend to treat the criticisms it makes as a Lovecraftian horror the mind mustn't engage with. Attempts to interpret this sort of illegible intent-to-reject as though it were well-intentioned criticism end up looking like: Very helpful to have a crisp example of this in text. ETA: I blanked out the first few times I read Jessica's post on anti-normativity [https://unstableontology.com/2021/04/12/on-commitments-to-anti-normativity/], but interpreted that accurately as my own intent to reject the information rather than projecting my rejection onto the post itself, treated that as a serious problem I wanted to address, and was able to parse it after several more attempts.

Duncan_Sabien: I understood the first sentence of your comment to be something like "one of my hypotheses about Logan's reaction is that Logan has some internal mental pressure to not-parse or not-understand the content of what Jessica is trying to convey." That makes sense to me as a hypothesis, if I've understood you, though I'd be curious for some guesses as to why someone might have such an internal mental pressure, and what it would be trying to accomplish or protect. I didn't follow the rest of the comment, mostly due to various words like "this" and "it" having ambiguous referents. Would you be willing to try everything after "attempts" again, using 3x as many words?

Benquo: Summary: Logan reports a refusal to parse the content of the OP. Logan locates a problem nonspecifically in the OP, not in Logan's specific reaction to it. This implies a belief that it would be bad to receive information from Jessica.

Logan reports a refusal to parse the content of the OP: "But then, 'the people most mentally concerned' happens, and I'm like, Which people were most mentally concerned? What does it mean to be mentally concerned? How could the author tell that those people were mentally concerned? Then we have 'with strange social metaphysics', and I want to know 'what is social metaphysics?', 'what is it for social metaphysics to be strange or not strange?' and 'what is it to be mentally concerned with strange social metaphysics'? Next is 'were marginalized'. How were they marginalized? What caused the author to believe that they were marginalized?
What is it for someone to be marginalized?" Most of this isn't even slightly ambiguous, and Jessica explains most of the things being asked about, with examples, in the body of the post. Logan locates a nonspecific problem in the OP, not in Logan's response to it: "I just, also have this feeling like something... isn't just wrong h..." (read more)

Viliam: I also don't know what "social metaphysics" means. I get the mood of the story. If you look at specific accusations, here is what I found (maybe I overlooked something):

* "there were at least 3 other cases of psychiatric institutionalizations by people in the social circle immediate to MIRI/CFAR; at least one other than me had worked at MIRI for a significant time, and at least one had done work with MIRI on a shorter-term basis. There was, in addition, a case of someone becoming very paranoid, attacking a mental health worker, and hijacking her car, leading to jail time; this person was not an employee of either organization, but had attended multiple CFAR events including a relatively exclusive AI-focused one. There are even cases of suicide in the Berkeley rationality community [...] associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption"
* "a prominent researcher was going around convincing people that human-level AGI was coming in 5-15 years."
* "MIRI became very secretive about research. Many researchers were working on secret projects, and I learned almost nothing about these. I and other researchers were told not..." (read more)

Eli Tyre: This comment was very helpful. Thank you.

Duncan_Sabien: Thanks for the expansion! Mulling.

habryka: I feel like one really major component that is missing from the story above, in particular from a number of the psychotic breaks, is Michael Vassar and a bunch of the people he tends to hang out with. I don't have a ton of detail on exactly what happened in each of the cases where someone seemed to have a really bad time, but having looked into it for a few hours in each case, I think all three of them were in pretty close proximity to having spent a bunch of time (and in some of the cases after taking psychedelic drugs) with Michael. I think this is important because Michael has, I think, a very large psychological effect on people, and also some bad tendencies to severely outgroup people who are not part of his very local social group, and also some history of attacking outsiders who behave in ways he doesn't like very viciously, including making quite a lot of very concrete threats (things like "I hope you will be guillotined, and the social justice community will find you and track you down and destroy your life, after I do everything I can to send them onto you"). I personally have found those threats to very drastically increase the stress I experience from inter... (read more)

jessicata: Quoting habryka: "I don't have a ton of detail on exactly what happened in each of the cases where someone seemed to have a really bad time, but having looked into it for a few hours in each case, I think all three of them were in pretty close proximity to having spent a bunch of time (and in some of the cases after taking psychedelic drugs) with Michael."
Of the 4 hospitalizations and 1 case of jail time I know about, 3 of those hospitalized (including me) were talking significantly with Michael, and the others weren't afaik (and neither were the 2 suicidal people), though obviously I couldn't know about all conversations that were happening. Michael wasn't talking much with Leverage people at the time. I hadn't heard of the statement about guillotines; that seems pretty intense. I talked with someone recently who hadn't been in the Berkeley scene specifically but who had heard that Michael was "mind-controlling" people into joining a cult, and decided to meet him in person, at which point he concluded that Michael was actually doing some of the unique interventions that could bring people out of cults, which often involves causing them to notice things they're looking away from. It's common for t... (read more)

habryka: IIRC the one case of jail time also involved a substantial interaction with Michael relatively shortly before the psychotic break occurred. Though someone else might have better info here and should correct me if I am wrong. I don't know of any 4th case, so I believe you that they didn't have much to do with Michael. This makes the current record 4/5 to me, which sure seems pretty high. Re "Michael wasn't talking much with Leverage people at the time": I did not intend to indicate that Michael had any effect on Leverage people, or to say that all or even a majority of the difficult psychological problems that people had in the community are downstream of Michael. I do think he had a large effect on some of the dynamics you are talking about in the OP, and I think any picture of what happened/is happening seems very incomplete without him and the associated social cluster. I think the part about Michael helping people notice that they are in some kind of bad environment seems plausible to me, though it doesn't have most of my probability mass (~15%); most of my probability mass (~60%) is indeed that Michael mostly just leverages the same mechanisms for building a pretty abusive and cult-like ingroup... (read more)

Andrew Rettek: Re "IIRC the one case of jail time also had a substantial interaction with Michael relatively shortly before the psychotic break occurred": I was pretty involved in that case after the arrest and for several months after, and spoke to MV about it, and AFAICT that person and Michael Vassar only met maybe once, casually. I think he did spend a lot of time with others in MV's clique, though.

habryka: Ah, yeah, my model is that the person had spent a lot of time with MV's clique, though I wasn't super confident they had talked to Michael in particular. Not sure whether I would still count this as an effect of Michael's actions; it seems murkier than I made it out to be in my comment.

jessicata: I think one of the ways of disambiguating here is to talk to people outside your social bubble, e.g. people who live in different places, people with different politics, people in different subcultures or on different websites (e.g. Twitter or Reddit), people you run into in different contexts, people who have had experience in different mainstream institutions (e.g. different academic departments, startups, mainstream corporations). Presumably, the more of a culty bubble you're in, the more prediction error this will generate, and the harder it will be to establish communication protocols across the gap. This establishes a point of comparison between people in bubble A vs B.
I spent a long part of the 2020 quarantine period with Michael and some friends of his (and friends of theirs) who were previously in a non-bay-area cult, which exposed me to a lot of new perspectives I didn't know about (not just theirs, but also those of some prison reform advocates and religious people), and made Michael seem less extremal or insular in comparison, since I wasn't just comparing him to the bubble of people who I already knew about.

habryka: Hmm, I've tried to read this comment for something like 5 minutes, but I can't really figure out its logical structure. Let me give it a try in a more written format. Presumably this is referring to distinguishing the hypothesis that Michael is kind of causing a bunch of cult-like problems from the hypothesis that he is helping people see problems that are actually present. I don't understand the next part. Why would there be a monotonic relationship here? I agree with the bubble part, and while I expect there to be a vague correlation, it doesn't feel like it measures anything like the core of what's going on. I wouldn't measure the cultishness of an economics department based on how good they are at talking to improv students. It might still be good for them to get better at talking to improv students, but failure to do so doesn't feel like particularly strong evidence to me (compared to other dimensions, like the degree to which they feel alienated from the rest of the world, or have psychotic breaks, or feel under a lot of social pressure to not speak out, or many other things that seem similarly straightforward to measure but feel like they get more at the core of the thing). But also, I don't understand how I am supposed to disambiguate things here. Like, maybe the hypothesis is that by doing this myself I could understand how insular my own environment is? I do think that seems like a reasonable point of evidence, though I also think my experiences have been very different from those of people at MIRI or CFAR. I also generally don't have a hard time establishing communication protocols across these kinds of gaps, as far as I can tell. This is interesting, and definitely some evidence, and I appreciate you mentioning it.

jessicata: If you think the anecdote I shared is evidence, it seems like you agree with my theory to some extent? Or maybe you have a different theory for how it's relevant? E.g. say you're an econ student, and there's this one person in the econ department who seems to have all these weird opinions about social behavior and thinks body language is unusually important. Then you go talk to some drama students and find that they have opinions that are even more extreme in the same direction. It seems like the update you should make is that you're in a more insular social context than the person with opinions on social behavior, who originally seemed to you to be in a small bubble that wasn't taking in a lot of relevant information. (Basically, a lot of what I'm asserting constitutes "being in a cult" is living in a simulation of an artificially small, closed world.)

habryka: The update was more straightforward, based on "I looked at some things that are definitely cults; what Michael does seems less extremal and insular in comparison; therefore it seems less likely for Michael to run into the same problems". I don't think that update required agreeing with your theory to any substantial degree.
I do think your paragraph still clarified things a bit for me, though with my current understanding, presumably the group to compare yourself against is less cults and more just, like, average people who are somewhat further out on some interesting dimension. And if you notice that average people seem really crazy and cult-like to you, then I do think this is something to pay attention to (though, like, average people are also really crazy on lots of topics, like schooling and death and economics and various COVID-related things that I feel pretty confident in, so I don't think this is some kind of knockdown argument, though I do think having arrived at truths that large fractions of the population don't believe definitely increases the risks from insularity).

jessicata: I definitely don't want to imply that agreement with the majority is a metric; rather, the metric is the ability to have a discussion at all, to be able to see part of the world they're seeing and take that information into account in your own view (which might be called "interpretive labor" or "active listening").

habryka: Agree. I do think the two are often kind of entwined (like, I am not capable of holding arbitrarily many maps of the world in my mind at the same time, so when I arrive at some unconventional belief that has broad consequences, the new models based on that belief will often replace more conventional models of the domain, and I will have to spend time regenerating the more conventional models and beliefs in conversation with someone who doesn't hold the unconventional belief, which does frequently make the conversation kind of harder, and which I still don't think is evidence of something going terribly wrong).

jessicata: Oh, something that might not have been clear is that talking with other people Michael knows made it clear that Michael was less insular than MIRI/CFAR people (who would have been less able to talk with such a diverse group of people, afaict), not just that he was less insular than people in cults.

Ben Pace: Do you know if the 3 people who were talking significantly with Michael did LSD at the time or with him? Erm... feel free to keep plausible deniability. Taking LSD seems to me like a pretty worthwhile thing to do in lots of contexts, and I'm willing to put a substantial amount of resources toward defending against legal attacks (or supporting you in the face of them) that are caused by you replying openly here. (I don't know if that's plausible; I've not thought about it much, so mentioned it anyway.)

jessicata: I had taken a psychedelic previously with Michael; one other person probably had; the other probably hadn't; I'm quite unsure of the latter two judgments. I'm not going to disambiguate about specific drugs.

Chris_Leong: What kinds of things was he attacking people for?

habryka: I am not fully sure. I have heard him say very similar things to the above directed at Holden (and have heard reports of the things I put in quotes above). I think in that case the thing he is attacking him for is leveraging people's desire to be a morally good person in a way that they don't endorse (and that plays into various guilt narratives) to get them to give him money, and to get them to dedicate their lives towards Effective Altruism, and via that technique preventing a substantial fraction of the world's top talent from dedicating themselves to actually important problems, and also causing them various forms of psychological harm.
AnnaSalamon: Quoting the OP: "I, in fact, asked a CFAR instructor in 2016-17 whether the idea was to psychologically improve yourself until you became Elon Musk, and he said 'yes'. This part of the plan was the same."

Re "this part of the plan was the same": IMO, some at CFAR were interested in helping some subset of people become Elon Musk, but this is different from the idea that everyone is supposed to become Musk and that that is the plan. IME there was usually mostly (though not invariably, which I expect led to problems; and for all I know "usually" may also have been the case in various parts and years of Leverage) acceptance for folks who did not wish to try to change themselves much.

Eli Tyre: Yeah, I very strongly don't endorse this as a description of CFAR's activities or of CFAR's goals, and I'm pretty surprised to hear that someone at CFAR said something like this (unless it was Val, in which case I'm less surprised). Most of my probability mass is on the CFAR instructor taking "become Elon Musk" to be a sort of generic, hyperbolic term for "become very capable."

jessicata: The person I asked was Duncan. I suggested the "Elon Musk" framing in the question. I didn't mean it literally; I meant him as an archetypal example of an extremely capable person. That's probably what was meant at Leverage too.

Duncan_Sabien: I do not doubt Jessica's report here whatsoever. I also have zero memory of this, and it is not the sort of sentiment I recall holding in any enduring fashion, or putting forth elsewhere. I suspect I intended my reply pretty casually/metaphorically, and would have similarly answered "yes" if someone had asked me if we were trying to improve ourselves to become any number of shorthand examples of "happy, effective, capable, and sane." 2016 Duncan apparently thought more of Elon Musk than 2021 Duncan does.

Gunnar_Zarncke: Related Tweet by Mason: https://twitter.com/webdevMason/status/1450175267909947395

Viliam: Okay, here goes the nitpicking... I am confused, because I assumed that Kegan stages are typically used by people who believe they are superior to LW-style rationalists. You know, "the rationalists believe in objective reality, so they are at Kegan level 4, while I am a post-rationalist who respects deep wisdom and religion, so I am at Kegan level 5."

jessicata: Here are some examples of long-time LW posters who think Kegan stages are important:

* Kaj Sotala
* G. Gordon Worley III
* Malcolm Ocean

Though I can't find an example of him posting on LessWrong, Ethan Dickinson is in the Berkeley rationality community and is mentioned here as introducing people to Kegan stages. There are multiple others; these are just the people for whom it was easy to find Internet evidence. There's a lot of overlap between people posting about "rationalism" and "postrationalism"; it's often a matter of self-identification rather than actual use of different methods to think, e.g. lots of "rationalists" are into meditation, and lots of "postrationalists" use approximately Bayesian analysis when thinking about e.g. COVID. I have noticed that "rationalists" tend to think the "rationalist/postrationalist" distinction is more important than the "postrationalists" do; "postrationalists" are now on Twitter using vaguer terms like "ingroup" or "TCOT" (this corner of Twitter) for themselves. I also mentioned a high amount of interaction between CFAR and the Monastic Academy in the post.
Duncan_Sabien: To speak a little bit on the interaction between CFAR and MAPLE: My understanding is that none of Anna, Val, Pete, Tim, Elizabeth, Jack, etc. (current or historic higher-ups at CFAR) had any substantial engagement with MAPLE. My sense is that Anna has spoken with MAPLE people a good bit in terms of total hours, but not at all a lot when compared with how many hours Anna spends speaking to all sorts of people all the time -- much, much less, for instance, than Anna has spoken to Leverage folks or CEA folks or LW folks. I believe that Renshin Lee (née Lauren) began substantially engaging with MAPLE only after leaving their employment at CFAR, and drew no particular link between the two (i.e. was not saying "MAPLE is the obvious next step after CFAR" or anything like that, but rather was doing what was personally good for them). I think mmmmaybe a couple other CFAR alumni or people-near-CFAR went to MAPLE for a meditation retreat or two? And wrote favorably about that, from the perspective of individuals? These (I think but do not know for sure) include people like Abram Demski and Qiaochu Yuan, and a small number of people from CFAR's hundreds of workshop alumni, some of w... (read more)

Unreal: This is Ren, and I was like "?!?" at this sentence in the post: "There is a significant degree of overlap between people who worked with or at CFAR and people at the Monastic Academy." I am having trouble engaging with LW comments in general, so thankfully Duncan is here with #somefacts. I pretty much agree with his list of informative facts. More facts:

* Adom / Quincy did a two-month apprenticeship at MAPLE, a couple years after being employed by CFAR. He and I are the only CFAR employees who've trained at MAPLE.
* CFAR-adjacent people visit MAPLE sometimes, maybe for about a week in length.
* Some CFAR workshop alums have trained at MAPLE or Oak as apprentices or residents, but I would largely not call them "people who worked with or at CFAR." There are a lot of CFAR alums, and there are also a lot of MAPLE alums.
* MAPLE and Oak have applied for EA grants in the past, which have resulted in them communicating with some CFAR-y type people like Anna Salamon, but this does not feel like a central example of "interaction" of the kind implied.

The inferential gap between the MAPLE and rationalist worldviews is pretty large. There's definitely an interesting "thing"... (read more)

jessicata: Thanks, this adds helpful details. I've linked this comment in the OP.

Eli Tyre: As someone who was more involved with CFAR than Duncan was from 2019 on, all this sounds correct to me.

habryka: I was also planning to leave a comment with a similar take.

habryka: There was a period in something like 2016-2017 when some rationalists were playing around with Kegan stages in the Bay Area. Most people I knew weren't huge fans of them, though the then-ED of CFAR (Pete Michaud) did have a tendency to bring them up from time to time in a way I found quite annoying. It was a model a few people used from time to time, though my sense is that it never got much traction in the community. The "often" in the above quoted sentence definitely feels surprising to me, though I don't know how many people at MIRI were using them at the time, and maybe it was more than in the rest of my social circle at the time.
I still hear them brought up sometimes, but usually in a pretty subdued way, more referencing the general idea of people being able to place themselves in a broader context, and in a much less concrete and less totalizing way than the way I saw them being used in 2016-2017.

Holly_Elmore: I was very peripheral to Bay Area rationality at that time, and I heard about Kegan levels often enough for it to rub me the wrong way. It seemed bizarre to me that one man's idiosyncratic theory of development would be taken so seriously by a community I generally thought was more discerning. That's why I remember so clearly that it came up many times.

Linch: +1, except I was more physically and maybe socially close.

Vaniver: FWIW I think this understates the influence of Kegan levels. I don't know how much people did differently because of it, which is maybe what you're pointing at, but it was definitely a thing people had heard of and expected other people to have heard of, and something some people targeted directly.

habryka: Huh, some chance I am just wrong here, but to me it didn't feel like Kegan levels had more prominence or expectation of being understood than e.g. land value taxes, which is also a topic some people are really into, but doesn't feel to me like it's very core to the community.

Ben Pace: Datapoint: I understand neither Kegan levels nor land value taxes.

Aryeh Englander: I see that many people are commenting that it's crazy to try to keep things secret between coworkers, or to not allow people to even mention certain projects, or that this kind of secrecy is psychologically damaging, or the like. Now, I imagine this is heavily dependent on exactly how it's implemented, and I have no idea how it's implemented at MIRI. But just as a relevant data point: this kind of secrecy is totally par for the course for anybody who works for certain government and especially military-related organizations or contractors. You need extensive background checks to get a security clearance, and even then you can't mention anything classified to someone else unless they have a valid need to know, you're in a secure classified area that meets a lot of very detailed guidelines, etc. Even within small groups, there are certain projects that you simply are not allowed to discuss with other group members, since they do not necessarily have a valid need to know. If you're not sure whether something is classified, you should be talking to someone higher up who does know. There are projects that you cannot even admit exist, and there are even words that you cannot men... (read more)

Zack_M_Davis: Quoting the OP: "I and other researchers were told not to even ask each other about what others of us were working on, on the basis that if someone were working on a secret project, they may have to reveal this fact. Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects."

Trying to maintain secrecy within the organization like this (as contrasted with secrecy from the public) seems nuts to me. Certainly, if you have any clever ideas about how to build an AGI, you wouldn't want to put them on the public internet, where they might inspire someone who doesn't appreciate the difficulty of the alignment problem to do something dangerous.
But one would hope that the people working at MIRI do appreciate the difficulty of the alignment problem (as a real thing about the world, and not just something to temporarily believe because your current employer says so). If you want the alignment-savvy people to have an edge over the rest of the world (!), you should want them to be maximally intellectually productive, which naturally requires the ability to talk to each other without the overhead of seeking permission from a designated authority figure. (The standard practice of bottlenecking information and decisionmaking on a designated authority figure makes sense if you're a government or a corporation trying to wrangle people into serving the needs of the organization against their own interests, but I didn't think "we" were operating on that model.)

Eliezer Yudkowsky: Secrecy is not about good trustworthy people who get to have all the secrets versus bad untrustworthy people who don't get any. This frame may itself be part of the problem; a frame like that makes it incredibly socially difficult to implement standard practices.

Davidmanheim: To attempt to make this point more legible: Standard best practice in places like the military and intelligence organizations, where lives depend on secrecy being kept from outsiders - but not insiders - is to compartmentalize and maintain "need to know." Similarly, in information security, the best practice is to give people access only to what they need, to granularize access to different services / data, and to differentiate read / write / delete access. Even in regular organizations, lots of information is need-to-know - HR complaints, future budgets, estimates of the profitability of a publicly traded company before quarterly reports, and so on. This is normal, and even though it's costly, those costs are needed. This type of granular control isn't intended to stop internal productivity; it is to limit the extent of failures in secrecy, and of attempts to exploit the system by leveraging non-public information, both of which are inevitable, since costs to prevent failures grow very quickly as the risk of failure approaches zero. For all of these reasons, the ideal is to have trustworthy people who have low but non-zero probabilities of screwing up on secrecy. Then, you ask them not to share things that are not necessary for others' work. You only allow limited exceptions and discretion where it is useful. The alternative, of "good trustworthy people [] get to have all the secrets versus bad untrustworthy people who don't get any," simply doesn't work in practice.
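As an illustration of the compartmentalization pattern Davidmanheim describes, here is a minimal sketch (an editor's illustration; the names, levels, and compartments are hypothetical, not anyone's actual system). Access requires both a sufficient clearance level and membership in the document's compartment, so clearance alone never grants organization-wide read access:

```python
from dataclasses import dataclass, field

# Hypothetical clearance ladder; real systems have more levels and caveats.
LEVELS = {"public": 0, "internal": 1, "secret": 2}

@dataclass
class Person:
    name: str
    clearance: str                                   # e.g. "secret"
    compartments: set = field(default_factory=set)   # projects with need-to-know

@dataclass
class Document:
    title: str
    level: str        # classification level
    compartment: str  # project compartment it belongs to

def may_read(person: Person, doc: Document) -> bool:
    """Clearance alone is not enough; need-to-know is checked per compartment."""
    cleared = LEVELS[person.clearance] >= LEVELS[doc.level]
    need_to_know = doc.compartment in person.compartments
    return cleared and need_to_know

alice = Person("alice", "secret", {"project-a"})
plan = Document("q3-plan", "secret", "project-b")
assert not may_read(alice, plan)  # cleared for "secret", but no need-to-know
```

The point of the two separate checks is the one made above: a leak or an attempt to exploit non-public information compromises only one compartment, not everything the leaker's clearance level could in principle reach.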
Zack_M_Davis: Thanks for the explanation. (My comment was written from my idiosyncratic perspective of having been frequently intellectually stymied by speech restrictions, and not having given much careful thought to organizational design.)

ChristianKl: I would imagine that most military and intelligence organizations have psychiatrists and therapists on staff, with whom employees can share information about their work projects when those projects cause them psychological trouble. Especially when operating in an environment that brings people into contact with issues that caused some people to be institutionalized, having only a superior to share information with, and nobody to deal with the psychological issues arising from the work, seems like a flawed system.

Davidmanheim: I agree that there is a real issue here that needs to be addressed, and I wasn't claiming that there is no reason to have support - just that there is a reason to compartmentalize. And yes, US military use of mental health resources is off the charts. But in the intelligence community there are some really screwed-up incentives, in that having a mental health issue can get your clearance revoked - and you won't necessarily lose your job, but the impact on a person's career is a great reason to avoid mental health care, and my (second-hand, not reliable) understanding is that there is a real problem with this.

anon03: Seconding this: When I did classified work at a USA company, I got the strong impression that (1) if I have any financial problems or mental health problems, I need to tell the security office immediately; (2) if I do so, the security office would immediately tell the military, and then the military would potentially revoke my security clearance. Note that some people get immediately fired if they lose their clearance. That wasn't true for me -- but losing my clearance would have certainly hurt my future job prospects. My strong impression was that neither the security office nor anyone else had any intention of helping us employees with our financial or mental health problems. Nope, their only role was to exacerbate personal problems, not solve them. There's an obvious incentive problem here; why would anyone disclose their incipient financial or mental health problems to the company before they blow up? But I think from the company's perspective, that's a feature, not a bug. :-P (As it happens, neither I nor any of my close colleagues had financial or mental health problems while I was working there. So it's possible that my impressions are wrong.)

Davidmanheim: I don't specifically know about mental health, but I do know specific stories about financial problems being treated as security concerns - and I don't think I need to explain how incredibly horrific it is to have an employee say to their employer that they are in financial trouble, and be told that they lost their job and income because of it.

Vaniver: Re "I didn't think 'we' were operating on that model": I think it's actually quite hard to have everyone in an organization trust everyone else in the organization, or to only hire people who would be trusted by everyone in the organization. So you might want some sort of tiered system, where (perhaps) the researchers all trust each other, but only trust the engineers they work with, and don't trust any of the ops staff; this means you only need one researcher to trust an engineer in order to hire them. [On net I think the balance is probably still in favor of "internal transparency, gated primarily by time and interests instead of security clearance", but it's less obvious than it originally seems.]

Vladimir_Nesov: The steelman that comes to mind is that by the time you actually know that you have a dangerous secret, it's either too late or too risky to set up a secrecy policy. So it's useful to install secrecy policies in advance. The downsides that are currently apparent are bugs that you still have the slack to resolve.

Chris_Leong: It depends. For example, if you have an intern program, then the interns probably aren't especially trusted, as these decisions generally don't receive the same degree of scrutiny as employment.
And ops people probably don't need to know the details of the technical research.

ChristianKl: In case it becomes known to any of a few powerful intelligence agencies that MIRI is working on an internal project that they believe is likely to create an AGI in one or two years, that intelligence agency will hack/surveil MIRI to get all the secrets. To the extent that MIRI's theory of change is that they are going to build an AGI on their own, independent of any outside organization, a high degree of secrecy is likely necessary for that plan to work. I think it's highly questionable that MIRI will be able to develop AGI faster than organizations like DeepMind (especially when researchers don't talk to each other), and thus it's unclear to me whether the plan makes sense, but it seems hard to imagine that plan without secrecy.

temporary_visitor_account: I want to provide an outside view that people might find helpful. This is based on my experience as a high school teacher (6 months total experience), a professor at an R1 university (eight years total experience), and someone who has mentored extraordinarily bright early-career scientists (15 years experience). It's very clear to me that the rationalist community is acting as a de facto school and system of interconnected mentorship opportunities. In some cases (CFAR, e.g.) this is explicit. Academia also does this. It has ~1000 years of experience, dating from the founding of the University of Cambridge, and has learned a few things in that time. An important discovery is that there are serious responsibilities that come with attending to "young" minds (young in quotes; generically the first quarter of life, depending on era; that's <15 up to, today, around <30). These minds are considered inherently vulnerable and need to be protected from manipulation, boundary violations, etc. It's been discovered that making this a blanket and non-negotiable rule has significant positive epistemic and moral effects that haven't been replicated with alternatives. Even before academic institu... (read more)

Unreal: Attempt to get shared models on "Variations in Responses". Quote from another comment, by Mr. Davis Kingsley: "My sense is that dynamics like those you describe were mostly not present at CFAR, or insofar as they were present weren't really the main thing." I bid: This counts as counter-evidence, but it's unfortunately not very strong counter-evidence. Or at least it's weaker than one might naively believe. Why? It is true of many groups that even while most of a group's activities, or even the main point of a group's activities, might be wholesome, above board, above water, beneficial, etc., it is possible that this is still secretly enabling the abuse of a silent or hidden minority -- the minority that, in the end, is going to be easiest to dismiss, ridicule, or downplay. It might even be only ONE person who takes all the abuse. I think this dynamic is so fucked that most people don't want to admit that it's a real thing. How can a community or group that is mostly wholesome and good and happy be hiding atrocious skeletons in their closet? (Not that this is true of CFAR or MIRI; I'm not making that claim. I do get a 'vibe' from Zoe's post that it's what Leverage 1.0 migh... (read more)

Vaniver: Re "On the other side of it, why do people seem TOO DETERMINED to turn him into a scapegoat? Most of you don't sound like you really know him at all.":
A blogger I read sometimes talks about his experience with lung cancer (decades ago), where people would ask his wife "so, he smoked, right?" and his wife would say "nope", and then they would look unsettled. He attributed it to something like "people want to feel like all health issues are deserved, and so their being good / in control will protect them." A world where people sometimes get lung cancer without having pressed the "give me lung cancer" button is scarier than a world where the only way to get it is pressing the button. I think there's something here where people are projecting all of the potential harm onto Michael, in a way that's sort of fair from a 'driving their actions' perspective (if they're worried about the effects of talking to him, maybe they shouldn't talk to him), but which really isn't owning the degree to which the effects they're worried about are caused by their own instability or the them-Michael dynamic. [A thing Anna and I discussed recently is, roughly, the tension between "telling the truth" and "not destabilizing the current regime"; I think it's easy to see there as being a core disagreement about whether or not it's better to see the way in which the organizations surrounding you are ___, and Michael is being thought of as some sort of pole for the "tell the truth, even if everything falls apart" principle.]

Unreal: +1 to your example, and esp. "isn't owning the degree to which the effects they're worried about are caused by their instability or the them-Michael dynamic." I also want to leave open the hypothesis that this thing isn't a one-sided dynamic, and that Michael and/or his group is unintentionally contributing to it. Whereas the lung cancer example seems almost entirely one-sided.

Unreal: Sorry if my tone about "something slippery" was way too confronting. I have simultaneously a lot of compassion and a lot of faith in people's ability to "handle difficult truths", or something like that. But that nuanced tone is hard to get across on the internet. If you feel negatively impacted by my comment here, you are welcome to challenge me or confront me about it, here or elsewhere.

lwanon: I don't live in the Bay anymore and haven't been on LessWrong for a while, but was informed of this thread by a friend. I have only one thing to say, and will not be commenting any further due to an NDA. Stay away from Geoff Anders and whatever nth iteration of "Leverage" he's on now.

Freyja: You might not be able to say this, but I'm wondering whether it's one of the NDAs Zoe references Geoff pressuring people to sign at the end of Leverage 1.0 in 2019.

Unreal: I am impressed by and appreciative of Logan for trying to say things on this post despite not being very coherent. I am appreciative of and admire Anna for making sincere attempts to communicate out of a principled stance in favor of information sharing. I am surprised and impressed by Zoe's coherence on a pretty triggering and nuanced subject. I enjoy hearing from jessicata, and I appreciate the way her mind works; I liked this post, and I found it kind of relieving. I am a bit crestfallen at my own lack of mental skillfulness in response to reading posts like this one. While this feels like a not-very-LW-y way to go about things, I will just try to make a list... of... things...

* I don't like LW's discussion norms or the structure of its website.
I think it favors left-brained, non-indexical dialogue -- which I believe compounds many of the major problems of science today. I want to holistically appreciate the role of emotions, intuitions, physical sensations, facial expressions, felt senses, identity, and background context in truth-seeking. LW feels like it wants to strip that away, or makes it very hard to bring them in. I don't blame LW, its creators, ... (read more)

LoganStrohl: oh man, sounds like we have a really similar relationship with LW, for the same reasons

Chris_Leong: I am very sorry to hear about your experiences. I hope you've found peace and that the organizations can take your experiences on board. On one hand you seem to want there to be more open discussion around mental health, whilst on the other you are criticising MIRI and CFAR for having people with mental health issues in their orbit. These seem somewhat in tension with each other. I think one of the factors is that the mission itself is stressful. For example, air traffic control and the police are high-stress careers, yet we need both. Another issue is that rationality is in some ways more welcoming of (at least some subset of) people whom society would deem weird. Especially since certain conditions can be paired with great insight or drive. It seems like the less a community appreciates the silver lining of mental health issues, the better they'd score according to your metric. Regarding secrecy, I'd prefer for AI groups to lean too much on the side of maintaining precautions about info-hazards than too little. (I'm only referring to technical research, not misbehaviour.) I think it's perfectly valid for donors to decide that they aren't going to give money without transparency, but ther... (read more)

jessicata: Re "I am very sorry to hear about your experiences. I hope you've found peace and that the organizations can take your experiences on board": Thanks, I appreciate the thought. Re "On one hand you seem to want there to be more open discussion around mental health, whilst on the other you are criticising MIRI and CFAR for having people with mental health issues in their orbit. These seem somewhat in tension with each other": I don't see why these would be in tension. If there is more and better discussion, then that reduces the chance of bad outcomes. (Partially, I brought up the mental health issues because it seemed like people were criticizing Leverage for having people with mental health issues in their orbit, but it seems like Leverage handled the issue relatively well, all things considered.) Re "I think one of the factors is that the mission itself is stressful. For example, air traffic control and the police are high-stress careers, yet we need both": I basically agree. Re "It seems like the less a community appreciates the silver lining of mental health issues, the better they'd score according to your metric": I don't think so. I'm explicitly saying that talking about weird perception... (read more)

Said Achmiz: Re "One point I strongly agree with you on is that rationalists should pay more attention to philosophy" and "Yes, I've definitely noticed a trend where rationalists are mostly continuing from Hume and Turing, neglecting e.g. Kant as a response to Hume": I've yet to see a readable explanation of what Kant had to say (in response to Hume or otherwise) that's particularly worth paying attention to (despite my philosophy classes in college having covered Kant, and my making some attempts later to read him).
If you (or someone else) were to write an LW post about this, I think this might be of great benefit to everyone here.

Rob Bensinger: I don't know what Kant-insights Jessica thinks LW is neglecting, but I endorse Allen Wood's introduction to Kant as a general resource. (Partly because Wood is a Kant scholar who loves Kant but talks a bunch about how Kant was just being sloppy / inconsistent in lots of his core discussions of noumena, rather than assuming that everything Kant says reflects some deep insight. This makes me less worried about IMO one of the big failure modes of philosopher-historians, which is that they get too creative with their novel interpretations + treat their favorite historical philosophers like truth oracles.) BTW, when it comes to transcendental idealism, I mostly think of Arthur Schopenhauer as "Kant, but with less muddled thinking and a not-absolutely-horrible writing style". So I'd usually rather go ask what Schopenhauer thought of a thing, rather than what Kant thought. (But I mostly disagree with Kant and Schopenhauer, so I may be the wrong person to ask about how to properly steelman Kant.)

LoganStrohl: Re "I've yet to see a readable explanation of what Kant had to say (in response to Hume or otherwise) that's particularly worth paying attention to": As an undergrad, instead of following the actual instructions and writing a proper paper on Kant, I thought it would be more interesting and valuable to simply attempt to paraphrase what he actually said, paragraph by paragraph. It's the work of a young person with little experience in either philosophy or writing, but it certainly seems to have had a pretty big influence on my thinking over the past ten years, and I got an A. So, mostly for your entertainment, I present to you "Kant in [really not nearly as plain as I thought at the time] English". (It's just the bit on apperception.)

FeepingCreature: I think this is either basic psychology or wrong. For one, Kant seems to be conflating the operation of a concept with its perception: "Since the concept of 'unity' must exist for there to be combination (or 'conjunction') in the first place, unity can't come from combination itself. The whole-ness of unified things must be a product of something beyond combination." This seems to say that the brain cannot unify things unless it has a concept of combination. However, just as an example, reinforcement learning in AI shows this to be false: unification can happen as a mechanistic consequence of the medium in which experiences are embedded, and an understanding of unification - a perception of it as a concept - is wholly unnecessary. Then okay, concepts are generalizations (compressions?) of sense data, and there's an implied world of which we become cognizant by assuming that the inner structure matches the outer structure. So far, so Simple Idea Of Truth. But then he does the same thing again with "unity", where he assumes that persistent identity-perception is necessary for judgment. Which I think any consideration of a nematode would disprove: judgment can also happen mechanistically. I ... (read more)
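To make the reinforcement-learning point concrete, here is a minimal tabular Q-learning sketch (a toy environment of the editor's construction, not anything from the thread): the value table aggregates, i.e. "unifies", thousands of separate experiences through a bare arithmetic update, with no concept of combination or unity represented anywhere in the program.

```python
import random

# Toy chain environment (hypothetical): 5 states in a row; action 1 moves
# right, action 0 stays put; reaching the last state yields reward 1.
n_states, n_actions = 5, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.1

Q = [[0.0] * n_actions for _ in range(n_states)]  # the agent's entire "mind"

def step(state, action):
    next_state = min(state + action, n_states - 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(10_000):
    # Epsilon-greedy: mostly exploit the table, occasionally explore.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # The only "unification" of past and present experience is this update:
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = 0 if next_state == n_states - 1 else next_state
```

After enough steps, Q[s][1] > Q[s][0] in every state: the table has combined its experiences into a coherent policy, mechanistically, which is the sense in which "unification" needs no prior concept of unity.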
I don't really have any idea what Kant is supposed to have been saying, or why he said any of those things, or the significance of any of it... I'm afraid I remain as perplexed as ever.

jessicata:
I've been working on a write-up on and off for months, which I might or might not ever get around to finishing. The basic gist is that, while Hume assumes you have sense-data and are learning structures like causation from this sense-data, Kant is saying you need concepts of causation to have sense-data at all.

The Transcendental Aesthetic is a pretty simple argument if applied to Solomonoff induction. Suppose you tried to write an AI to learn about time, which didn't already have time. How would it structure its observations, so it could learn about time from these different observations? That seems pretty hard, perhaps not really possible, since "learning" implies past observations affecting how future observations are interpreted. In Solomonoff induction there is a time-structure built in, which structures observations. That is, the inductor assumes a priori that its observations are structured in a sequence. Kant argues that space is also a priori in this way. This is a somewhat suspicious argument, given that vanilla Solomonoff induction doesn't need a priori space to structure its observations. But maybe it's true in the case of humans, since our visual cortexes have a notion o[...]

Said Achmiz:
Hmm. Both of these ideas seem very wrong (though Kant's, perhaps, more so). Is there anything else of value? If this (and similar things) are all that there is, then maybe rationalists are right to mostly ignore Kant...

Chris_Leong:
I appreciate Kant's idea that certain things may arise from how we see and interpret the world. I think it's plausible that this is an accurate how level description of things like counterfactuals and probability. (I'm a bit busy atm, so I haven't provided much detail.)

Jonathan_Graehl:
'how-level' would be easier to parse
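To make jessicata's Solomonoff point above concrete, here is a toy sketch in Python. It is illustrative only: the four-hypothesis "program" class and its complexity numbers are invented for this example, and real Solomonoff induction, which runs over all computable programs, is uncomputable.

```python
# Toy "Solomonoff-style" predictor over bit sequences.
HYPOTHESES = [
    # (name, complexity in bits, program: history tuple -> predicted next bit)
    ("always-0",    1, lambda h: 0),
    ("always-1",    1, lambda h: 1),
    ("alternate",   2, lambda h: (1 - h[-1]) if h else 0),
    ("repeat-last", 2, lambda h: h[-1] if h else 0),
]

def posterior_predict(observed):
    """Prior weight 2^-complexity; keep programs consistent with the observed
    *ordered* prefix; return the posterior and P(next bit = 1)."""
    weights, mass_on_1 = {}, 0.0
    for name, complexity, prog in HYPOTHESES:
        if all(prog(observed[:i]) == observed[i] for i in range(len(observed))):
            weights[name] = 2.0 ** -complexity
            if prog(observed) == 1:
                mass_on_1 += weights[name]
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()}, mass_on_1 / total

# Note that `observed[:i]` bakes temporal order into the hypothesis interface:
# the inductor never has to *learn* that observations form a sequence. That
# built-in ordering is the a priori time-structure being pointed at.
posterior, p_next_1 = posterior_predict((0, 1, 0, 1))
print(posterior)  # {'alternate': 1.0} -- the only surviving hypothesis
print(p_next_1)   # 0.0 -- "alternate" says the next bit continues 0,1,0,1,0...
```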
Zack_M_Davis:
> there's a big difference between saying it saves a few years vs. causes us to have a chance at all when we otherwise wouldn't. [...] it seems like most of the relevant ideas were already in the memespace
I was struck by the 4th edition of AI: A Modern Approach quoting Norbert Wiener writing in 1960 (!): "If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively ... we had better be quite sure that the purpose put into the machine is the purpose which we really desire." It must not have seemed like a pressing issue in 1960, but Wiener noticed the problem! (And Yudkowsky didn't notice, at first.) How much better off are our analogues in the worlds where someone like Wiener (or, more ambitiously, Charles Babbage) did treat it as a pressing issue? How much measure do they have?

Chris_Leong:
Yeah, it's quite plausible that it might have taken another decade (then again, I don't know if Bostrom thought superintelligence was possible before encountering Eliezer).

Chris_Leong:
I guess my point was that a community that excludes anyone who has mental health issues would score well on your metric, while a community that is welcoming would score poorly.

Another possibility is that they might be comparing their ability to form a correct philosophical opinion. This isn't the same as raw knowledge, but I suspect that our epistemic position makes it much easier. Not only because of more information, but also because modern philosophy tends to be much clearer and more explicit than older philosophy, so people can use it as an example to learn how to think clearly.

ESRogs:
Was one of the much's in this sentence supposed to be a 'little'? (My guess is that you meant to say that you want orgs to err on the side of being overly cautious rather than being overly reckless, but wanted to double-check.)

Chris_Leong:
I'd prefer too much rather than too little.

ChristianKl:
> As weird as the situation got, with people being afraid of demonic subprocesses being implanted by other people, there were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR [...] As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.
That sounds to me like you are saying that people who were talking about demons got marginalized. To me that's not a sign of MIRI/CFAR being culty, but what most people would expect from a group of rationalists. It might have been a wrong decision not to take people who talk about demons more seriously, to address their issues, but it doesn't match the error type of what's culty. If I'm misunderstanding what you are saying, can you clarify?

Benquo:
There's an important problem here, which Jessica described in some detail in a more grounded way than the "demons" frame:
> As a brief model of something similar to this (not necessarily the same model as the Leverage people were using): people often pick up behaviors ("know-how") and mental models from other people, through acculturation and imitation. Some of this influence could be (a) largely unconscious on the part of the receiver, (b) partially intentional on the part of the person having mental effects on others (where these intentions may include behaviorist conditioning, similar to hypnosis, causing behaviors to be triggered under certain circumstances), and (c) overall harmful to the receiver's conscious goals. According to IFS-like psychological models, it's common for a single brain to contain multiple sub-processes with different intentions. While the mental subprocess implantation hypothesis is somewhat strange, it's hard to rule out based on physics or psychology.
If we're confused about a problem like Friendly AI, it's preparadigmatic, and therefore most people trying to talk about it are using words wrong. Jessica is reporting a perverse optimization where people are penalized more for talking confusedly about important problems they're confused about than for simply ignoring the problems.

jessicata:
It's not particularly a sign of being "culty", but my main point was that it worked out worse for the people involved, overall, so it doesn't make that much sense to think Leverage did worse overall with their mental health issues and weird metaphysics. I do think that Bayesian virtue, taken to its logical conclusion, would consider these hypotheses to the point of thinking about whether they explain sensory data better than alternative hypotheses, and not reject them because they're badly formalized and unproven at the start; there is an exploratory stage in generating new theories, where the initial explanations are usually wrong in important places, but can lead to more refined theories over time.
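That Bayesian point can be put in numbers. The figures below are invented purely for illustration: posterior odds are prior odds times the likelihood ratio, so a strange hypothesis that predicts the data better can climb from negligible to worth considering.

```python
# Posterior odds = prior odds * likelihood ratio. All numbers are made up,
# purely to illustrate "judge weird hypotheses by how well they predict data".
prior_odds = 0.01 / 0.99     # weird hypothesis starts at roughly 1:99 against
likelihood_ratio = 10.0      # suppose it assigns the observations 10x the probability
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 3))  # ~0.092: still unlikely, but no longer dismissible
```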
Freyja:
It seems like one of the problems with 'the Leverage situation' is that, collectively, we don't know how bad it was for the people involved. There are many key Leverage figures who don't seem to have gotten involved in these conversations (anonymously or not) or ever spoken publicly, or in groups connected to this community, about their experience. And we have evidence that some of them have been hiding their post-Leverage experiences from each other.

So I think making the claim that the MIRI/CFAR-related experiences were 'worse' because there exists evidence of psychiatric hospitalisation etc. is wrong and premature. And also? I'm sort of frustrated that you're repeatedly saying that right now, when people are trying to encourage stories from a group of people whom we might expect to have felt insecure, paranoid, and gaslit about whether anything bad 'actually happened' to them.

jessicata:
It's a guess based on limited information, obviously. I tagged it as an inference. It's not just based on public information; it's also based on having talked with some ex-Leverage people. I don't like that you're considering it really important for ex-Leverage people to say things were "really bad" for them while discouraging me from saying things about how bad my own (and others') experiences were; that's optimizing for a predetermined conclusion in opposition to actually listening to people (which could reveal unexpected information). I'll revise my estimate if I get sufficient evidence in the other direction.

AlexMennen:
> I was discouraged from writing a blog post estimating when AI would be developed, on the basis that a real conversation about this topic among rationalists would cause AI to come sooner, which would be more dangerous
Does anyone actually believe and/or want to defend this? I have a strong intuition that public-facing discussion of AI timelines within the rationalist and AI alignment communities is highly unlikely to have a non-negligible effect on AI timelines, especially in comparison to the potential benefit it could have for the AI alignment community being better able to reason about something very relevant to the problem they are trying to solve. (Ditto for probably most, but not all, topics regarding AGI that people interested in AI alignment may be tempted to discuss publicly.)

habryka:
I kind of believe this, but it's not a huge effect. I do think that the discussion around short timelines had some effect on the scaling laws research, which I think had some effect on OpenAI going pretty hard on aggressively scaling models, which accelerated progress by a decent amount. My guess is the benefits of public discussion are still worth more, but given our very close proximity to some of the world's best AI labs, I do think the basic mechanism of action here is pretty plausible.

Steven Byrnes:
Your comment makes sense to me as a consideration for someone writing on LW in 2017. It doesn't really make sense to me as a consideration for someone writing on LW in 2021. (The horse has left the barn.) Do you agree?

habryka:
No, I think the same mechanism of action is still pretty plausible, even in 2021 (attracting more researchers and encouraging more effort to go into blindly-scaling-type research), so I think additional research here could have similar effects.
As Gwern has written about extensively, for some reason the vast majority of AI companies are still not taking the scaling hypothesis seriously, so there is lots of room for more AI companies to go all-in on it. I also think there is a broader reference class of "having important ideas about how to build AGI" (of which the scaling hypothesis is one) that, due to our proximity to top AI labs, does seem like it could have a decently sized effect.

Steven Byrnes:
As in my comment, I think saying "Timelines are short because the path to AGI is (blah blah)" is potentially problematic in a way that saying "Timelines are short" is not. In particular, it's especially problematic (1) if "(blah blah)" is an obscure line of research, or (2) if "(blah blah)" is a well-known but not widely accepted line of research (e.g. the scaling hypothesis) AND the post includes new concrete evidence or new good arguments in favor of it. If neither of those is applicable, then I want to say there's really no problem. Like, if some AI company leader is not betting on the scaling hypothesis, not after GPT-2, not after GPT-3, not after everything that Gwern and OpenAI etc. have said about the topic... well, I have a hard time imagining that yet another LW post endorsing the scaling hypothesis would be what tips the balance for them.

habryka:
I have updated over the years on how many important people in AI read and follow LessWrong and the associated meme-space. I agree marginal discussion does not make a big difference. I also think all the discussion overall probably didn't make enough of a difference to be net-negative, but it was substantial enough to cause me to think for quite a while about whether it was worth it overall. I agree with you that the future costs seem marginally lower, but not low enough to keep me from thinking hard, and wanting to encourage others to think hard, about the tradeoff. My estimate of the tradeoff came out on the net-positive side, but I wouldn't think it would be crazy for someone's tradeoff to come out on the net-negative side.

Zack_M_Davis:
There could be more than one horse.

Vaniver:
> Does anyone actually believe and/or want to defend this?
I believe this. For example, one of my benign beliefs in ~2014 was "songs in frequency space are basically just images; you can probably do interesting things in the music space by just taking off-the-shelf image stuff (like style transfer) and doing it on songs." The first paper doing something similar that I know of came out in 2018. If I had posted about it in 2014, would it have happened sooner? Maybe - I think there's a sort of weird thing going on in the music space, where all the people with giant libraries of music want to maintain their relationships with the producers of music, so there's not much value for them in doing research like this, and so there might be unusually little searching for fruit in that corner of the orchard. But also maybe my idea was bad, or wouldn't really have helped all that much, or no one would have done it just because they read it. (I don't think that paper worked in wavelet space, but I didn't look too closely.)

I'm much less certain that the net effect is "you shouldn't talk about such things." The more important the consequences of sharing a belief seem to you ("oh, if you just put together X and Y you can build unsafe AGI"), the more important it is for your models that you're right ("oh, if that doesn't work I think we have five more years").
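Vaniver's "songs in frequency space are basically just images" idea can be sketched concretely. The sketch below is illustrative only: "song.wav" is a hypothetical input file, and a Gaussian blur stands in for a fancier image-model operation like style transfer.

```python
# Audio -> magnitude spectrogram -> any off-the-shelf image operation ->
# back to audio via Griffin-Lim phase reconstruction.
import numpy as np
import librosa
import soundfile as sf
from scipy.ndimage import gaussian_filter

y, sr = librosa.load("song.wav", sr=None)       # load audio at its native rate
spec = np.abs(librosa.stft(y))                  # 2D array: freq bins x time frames
spec_edited = gaussian_filter(spec, sigma=1.0)  # treat the spectrogram as an image
y_edited = librosa.griffinlim(spec_edited)      # reconstruct phase -> waveform
sf.write("song_edited.wav", y_edited, sr)
```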
Steven Byrnes:
It's possible for someone to believe "Timelines are short because the path to AGI is (blah blah)", in which case they might hesitate to publicly justify their timelines, and this might indirectly bleed into a hesitation to bring the subject up in the first place. I agree that merely stating a belief about timelines publicly on LW, per se, seems pretty harmless right now, unless there's something I'm not thinking of. (Update: if you're a famous AI person or politician publishing a high-profile op-ed saying that it's feasible for a focused project to make AGI today, that would be a bit different; that would require some thought about whether you're contributing to a worldwide competitive sprint to AGI. But a LW post today wouldn't move the needle on that, I think.)

AlexMennen:
This requires a high degree of precision in your knowledge of the path to AGI, which makes it seem not that plausible, unless timelines are very short no matter what you say, because others will stumble their way through the path you've identified soon anyway.

bn22:
None of the arguments in this post seem as if they actually indict anything about MIRI or CFAR. The first claim, of CFAR/MIRI somehow motivating four suicides, provides no evidence that CFAR is unique in this regard or conducive to this kind of outcome, and it seems like a bizarre framing of events, considering that stories about things like someone committing suicide out of suspicion over the post office's nefarious agenda generally aren't seen as an issue on the part of the postal service. Additionally, the focus on Roko's-Basilisk-esque "info hazards" as a part of MIRI/CFAR reduces the credibility of this point, seeing as the original basilisk thought experiment was invented as a criticism of SIAI, and according to every LDT the basilisk has no incentive to actually carry out any threats. The second part is even weaker, in how it essentially posits a non-argument: that the formation of a conspiracy mindset would be a foreseeable hazard of one's coworkers disagreeing with them on something important for possibly malevolent reasons, and of there being secrecy in a workplace. The point about how someone other than CFAR calling the police on CFAR-opposed people who were doing something illegal t[...]

cousin_it:
Maybe at Google or some other corporation you'd have a more pleasant time, because many employees view it as "just putting food on the table", which stabilizes things. It has some bureaucratic and Machiavellian stuff for sure, but to me it feels less psychologically pressuring than having everything be about the mission all the time.

Just for disclosure, I was a MIRI research associate for a short time, long ago, remotely, and the experience mostly just passed me by. I only remember lots of email threads about AI strategy, nothing about psychology. There was some talk about having secret research, but when joining I said that I wouldn't work on anything secret, so all my math / decision theory stuff is public on LW.

adamzerner:
Some of the mental health issues seem like they might be due to individual people not acting as appropriately as they should, but a lot of it seems to me to be due to the inherent stresses of trying to save the world.
And if this is indeed the case, then we should probably have some sort of system in place, or training, to prepare people for these psychological stresses before they dive in. I started musing on this idea earlier in Preparing For Ambition. In that post I focused on my anxiety as a startup founder, but I think it applies to various fields. For example, recently I came across the following excerpt from My Emotions as CEO:
> I felt lonely every day - maybe not constantly, but definitely every day for 9+ years. I haven't talked to a CEO who didn't feel extreme loneliness. For the first time in my life I didn't feel like I could be friends, even work friends, with anyone else on the team. That might have been my own baggage or a consequence of struggling to bring my whole self to work. The loneliness driver I've heard of most from other CEOs is the inability to talk with people about the emotional rollercoaster that's inherent to the role.
It seems like in being a CEO, the[...]

Nisan:
The psychotic break you describe sounds very scary and unpleasant, and I'm sorry you experienced that.

Charlie Steiner:
Thanks. This puts the social dynamics at play in a different light for me - or rather, it takes things I had heard about but not understood and puts them in any kind of light at all. I am liking the AI Insights writeup so far. I feel a strong sympathy for people who think they are better philosophers than Kant.

Vanessa Kosoy:
> everything I knew about how to be hired would point towards having little mental resistance to organizational narratives
Can you elaborate a little on this?

jessicata:
At university, for example, you'll generally get a better grade if you let the narrative you're being told be the basic structure of your thinking, even if you have specific disagreements in places where you have specific evidence. In Rao's terminology, people who are Clueless are hired for, in an important sense, actually believing the organizational narrative at some level (even if there is some amount of double-think), and being manipulable by others around them who are maintaining the simulation. If I showed too much disagreement with the narrative, without high ability to explain myself in terms of the existing narrative, it would probably have seemed less desirable to hire me.

Vanessa Kosoy:
I'm not sure whether you're talking about hiring in most organizations or hiring in MIRI in particular?

jessicata:
It applies to most organizations, including MIRI. There are some differences in the MIRI case, like the ideology being more altruistic-focused and ambitious, and also more plausible in a lot of ways.

Scott Garrabrant:
It seems to me like MIRI hiring, especially of researchers in 2015-2017, but also in general, reliably produced hires with a certain philosophical stance (i.e. people who like UDASSA, TDT, etc.) and people with a certain kind of mathematical taste (i.e. people who like reflective oracles, Löb, Haskell, etc.). I think that it selects pretty strongly for the above properties, and doesn't have much room for "little mental resistance to organizational narratives" (beyond any natural correlations). I think there is also some selection on trustworthiness (e.g. following through with commitments) that is not as strong as the above selection, and that trustworthiness is correlated with altruism (and the above philosophical stance).
I think that altruism, ambition, timelines, agreement about the strategic landscape, agreement about probability of doom, little mental resistance to organizational narratives, etc. are/were basically rounding errors compared to selection on philosophical competence, and thus, by proxy, philosophical agreement (specifically a kind of philosophical agreement that things like agreement about timelines are not a good proxy for). (Later on, there was probably mo[...]

Benquo:
This isn't a full answer, but I suspect you believe this largely because you don't know what someone as smart as you who doesn't have "little mental resistance to organizational narratives" looks like, because mostly you haven't met them. They kind of look like very smart crazy people.

Scott Garrabrant:
Hmm, so this seems plausible, but in that case, it seems like the base rate for "little mental resistance to organizational narratives" is very low, and the story should not be "hired people probably have little mental resistance because they were hired" but should instead be "hired people probably have little mental resistance because basically everyone has little mental resistance." (These are explanatory uses of "because", not causal uses.) This second story seems like it could be either very true or very false, for different values of "little", so it doesn't seem like it has a truth value until we operationalize "little."

Even beyond the base rates, it seems likely that a potential hire could be dismissed because they seem crazy, including at MIRI, but I would predict that MIRI is pretty far on the "willing to hire very smart crazy people" end of the spectrum.

4thWayWastrel:
Agree or disagree: "There may be a pattern wherein rationalist types form an insular group to create and apply novel theories of cognition to themselves, and it gets really weird and intense, leading to a rash of psychological breaks."

Viliam:
Is "rationalist types" a euphemism for Aspergers? In that case, "people with Asperger's creating a new theory of cognition, applying it to themselves, and only getting feedback from other people with Asperger's studying the same theory" sounds like something that could easily spiral out of control.

philip_b:
> I know there are serious problems at other EA organizations, which produce largely fake research (and probably took in people who wanted to do real research, who become convinced by their experience to do fake research instead), although I don't know the specifics as well.
EAs generally think that the vast majority of charities are doing low-value and/or fake work. Do I understand correctly that here by "fake" you mean low-value, or only pretending to be aimed at solving the most important problems of humanity, rather than actual falsification going on - publishing false data, that kind of thing?

jessicata:
I mean pretending to be aimed at solving the most important problems, and also creating organizational incentives for actual bias in the data. For example, I heard from someone at GiveWell that, when they created a report saying that a certain intervention had (small) health downsides as well as upsides, their supervisor said that the fact that these downsides were investigated at all (even if they were small) decreased the chance of the intervention being approved, which creates an obvious incentive for not investigating downsides.
There's also a divergence between GiveWell's internal analysis and their more external presentation and marketing; for example, while SCI is and was listed as a global health charity, GiveWell's analysis found that, while there was a measurable positive effect on income, there wasn't one on health metrics.

jefftk:
That doesn't sound right? They say there is strong evidence that deworming kills the parasites, and weaker evidence that it both improves short-term health and leads to higher incomes later in life. But inasmuch as it improves income, it pretty much has to be doing that via making health better: there isn't really any other plausible path from deworming to higher income. https://www.givewell.org/international/technical/programs/deworming

Benquo:
I'd expect that to show up in some long-run health metrics if that were the mechanism, though. One way this could be net-neutral is that it helps kids with worms but hurts kids without worms. They don't test for high parasitic load before administering these pills; they give them to all the kids (using coercive methods). But also, killing foreign creatures living in the body is often bad for health. This is a surprising fact - on first principles I'd have predicted that mass administration of antibiotics would improve health by killing off gut bacteria - but this seems not to be generically true, and sometimes we even suffer from the missing gut bugs. (E.g. probiotics, and, more directly relevant, helminth therapy.)

jefftk:
GiveWell discusses this here: https://www.givewell.org/international/technical/programs/deworming
Summary:
* ~0.1 kg weight increase
* Unusably noisy data on hemoglobin levels
* No effect on height
While other metrics might show a change, if collected carefully, I think all we know at this point is that no one has done that research? Which is very different from saying that we do know that there is no effect on health.

Benquo:
> While other metrics might show a change, if collected carefully, I think all we know at this point is that no one has done that research? Which is very different from saying that we do know that there is no effect on health?
Neither Jessica nor I said there was no effect on health. It seems like maybe we agree that there was no clearly significant, actually measured effect on long-run health. And GiveWell's marketing presents its recommendations as reflecting a justified high level of epistemic confidence in the benefit claims of its top charities.

We know that people have looked for long-run effects on health and failed to find anything more significant than the levels that routinely fail replication. With an income effect that huge attributable to health, I'd expect a huge, p<.001 improvement in some metric like reaction times or fertility, or a reduction in the incidence of some well-defined, easy-to-measure malnutrition-related disease.

Worth noting that antibiotics (in a similar epistemic reference class to dewormers, for reasons I mentioned above) are used to fatten livestock, so we should end up with some combination of:
* Skepticism of weight gain as evidence of benefit.
* Increased crede[...]

jefftk:
> Neither Jessica nor I said there was no effect on health
I had read "GiveWell's analysis found that, while there was a measurable positive effect on income, there wasn't one on health metrics" as "there was an effect on income that was measurable and positive, but there wasn't an effect on health metrics". Rereading, I think that's probably not what Jessica meant, though? Sorry!

jessicata:
Yeah, I meant there wasn't a measurable positive health effect.
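For calibration on Benquo's "p < .001" point, here is a rough power calculation using the standard normal-approximation sample-size formula for a two-sample test. The standardized effect sizes are invented for illustration, not taken from GiveWell or the deworming literature.

```python
# How big a two-arm trial detects a standardized effect d at p < .001?
from scipy.stats import norm

def n_per_arm(d, alpha=0.001, power=0.8):
    """Participants per arm for a two-sided two-sample test of effect size d."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~3.29 for alpha = .001
    z_power = norm.ppf(power)           # ~0.84 for 80% power
    return 2 * ((z_alpha + z_power) / d) ** 2

for d in (0.5, 0.2, 0.05):   # large, small, and tiny standardized effects
    print(f"d = {d}: ~{n_per_arm(d):,.0f} per arm")
# d = 0.5: ~137 per arm; d = 0.2: ~854; d = 0.05: ~13,660 -- so a follow-up
# of typical size could easily miss a small long-run health effect at this
# threshold even if one exists.
```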
mukashi:
As someone who is pretty much an outsider to this community, I find it interesting that a major drive for many people in this community seems to be tackling the most important problems in the world. I am not saying it's a bad thing; I am just surprised. In my case, I work in academia not so much because of the impact I can have working here, but mainly because it allows me to have a more balanced life with a flexible time schedule.

Kenny:
I don't think psychedelics really do much for most people. I think those who say they have been fundamentally altered by them most likely had a construed notion or prior before getting into the whole spiel. It's just a means to an end for them. Thinking that psychedelics would change them fundamentally made it easier for them to give in to the notion that they've fundamentally changed as a result of taking psychedelics, rather than the psychedelics being part of the entire psychological journey they are going through, regardless of whether psychedelics were in[...]

Viljami Virolainen:
You seem to be claiming that without somebody giving you suggestions, people would not think of psychedelic trips as something special. Well, as the discoverer of the substance, Hofmann surely did not have any preconceptions, since the first time he was exposed to LSD it was an accident, and he had no idea of its psychedelic properties. His account is freely available online here: https://www.hallucinogens.org/hofmann/child1.htm

A quote where he describes the second exposure, which was an intentional experiment: "This self-experiment showed that LSD-25 behaved as a psychoactive substance with extraordinary properties and potency. There was to my knowledge no other known substance that evoked such profound psychic effects in such extremely low doses, that caused such dramatic changes in human consciousness and our experience of the inner and outer world."

Kenny:
That is exactly what I said in another comment about changing your state of mind and nothing else. Suggestions are outside of that change of state of mind. You seem to be confusing the effects of psychedelics with voodoo/woo/spiritual stuff. I know that viewing psychedelics as something related to spirituality is a popular rhetoric among both users and nonusers. The spirituality is what I mean by suggestion. You are suggesting something that has nothing to do with the mechanism of action of the drug.

James_Miller:
As a college professor who has followed the rationality community from the beginning, from physically afar, here are my suggestions:
1. Illegal drugs are, on average, very bad. How about a policy that if you use illegal drugs you are presumptively considered not yet good enough to be in the community?
2. My college has a system of deans that help students deal with all kinds of problems. Every student has the same dean for all her time at my school, so the dean gets to know the student.
Perhaps trusted people could become rationality deans a[...]

ozziegooen:
Thanks for the opinion; I find the take interesting. I'm not a fan of the line "How about a policy that if you use illegal drugs you are presumptively considered not yet good enough to be in the community?", in large part because of the phrase "not yet good enough". This is a really thorny topic that seems to have several assumptions baked into it that I'm uncomfortable with. I also think that many here like at least some drugs that are "technically illegal", in part because the FDA/federal rules move slowly. Different issue, though. I like points 2 and 3; I imagine if you had a post with just those two it would have gotten way more upvotes.

James_Miller:
Thanks for the positive comment on (2) and (3); I probably should have written them in a separate comment from (1). While I'm far from an expert on drugs or the California rationalist community, the comments on this post seem to scream "huge drug problem." I hope leaders in the community at least consider evaluating the drug situation in the community. I agree with you about the FDA.

Avi:
In reference to point 1, how would you define "illegal drugs" (as defined by which country/state)? My understanding is that if you applied that rule (people who have used or currently use "illegal drugs" are not "good enough" to be in the community), it would rule out at least ~90% of the humans I've ever interacted with.

agrippa:
Thank you SO MUCH for writing this. The case Zoe recounts of someone "having a psychotic break" sounds tame relative to what I'm familiar with.
> Someone can mentally explore strange metaphysics, e.g. a different relation to time or God, in a supportive social environment where people can offer them informational and material assistance, and help reality-check their ideas.
I think this is so well put and important. I think that your fear of extreme rebuke from publishing this stuff is obviously reasonable when dealing with a group that believes itse[...]

ChristianKl:
It is a sign of most cults that they have a clear interior/exterior distinction. Whether or not someone is a Scientologist, for example, is very clear. The fact that CFAR doesn't have that is an indication against it being a cult.

seed:
> Scott Aaronson, for example, blogs about "blank faced" non-self-explaining authoritarian bureaucrats being a constant problem in academia. Venkatesh Rao writes about the corporate world, and the picture presented is one of a simulation constantly maintained through improv.
Well, I once met a person in academia who was convinced she'd be utterly bored anywhere outside academia. If you want an unbiased perspective on what life is like outside the rationality community, you should talk to people not associated with the rationality community. (Yes, [...]