https://statmodeling.stat.columbia.edu/2021/09/15/the-bayesian-cringe/

Statistical Modeling, Causal Inference, and Social Science

The Bayesian cringe

Posted on September 15, 2021 9:45 AM by Andrew

I used this expression the other day in Lauren's seminar and she told me she'd never heard it before, which surprised me because I feel like I've been saying it for a while, so I googled *statmodeling bayesian cringe* but nothing showed up! So I guess I should write it up. Eventually everything makes its way from conversation to blog to publication. For example, the earliest appearance I can find of "Cantor's corner" is here, but I'd been using that phrase for a while before then, and it ultimately appeared in print (using the original Ascii art!) in this article in a physics journal.

So . . . the Bayesian cringe is this attitude that many Bayesian statisticians, including me, have had, in which we're embarrassed to use prior information. We bend over backward to assure people that we're estimating all our hyperparameters from the data alone, we say that Bayesian statistics is the quantification of uncertainty, and we don't talk much about priors at all except as a mathematical construct. The priors we use are typically structural--not in the sense of "structural equation models," but in the sense that the priors encode structure about the model rather than particular numerical values. An example is the 8 schools model--actually, everything in chapter 5 of BDA--where we use improper priors on hyperparameters and never assign any numerical or substantive prior information.

The Bayesian cringe comes from the attitude that non-Bayesian methods are the default and that we should only use Bayesian approaches when we have very good reasons--and even that isn't considered enough sometimes, as discussed in section 3 of this article. So that's led us to emphasize innocuous aspects of Bayesian inference.

Now, don't get me wrong, I think there are virtues to flat-prior Bayesian inference too. Not always--sometimes the maximum likelihood estimate is better, as in some multidimensional problems where the flat prior is actually very strong (see section 3 of this article), or just because it's a mistake to take a posterior distribution too seriously if it comes from an unrealistic prior (see section 3 here)--but for the reasons given in BDA, I typically think that flat-prior Bayes is a step forward.

But I keep coming across problems where a little prior information really helps--see for example Section 5.9 here, it's one of my favorite examples--and more and more I've been thinking that it makes sense to just start with a strong prior. Instead of starting with the default flat or super-weak prior, start with a strong prior (as here) and then retreat if you have prior information saying that this prior is too strong, that effects really could be huge or whatever.

As the years have gone on, I've become more and more sympathetic with the attitude of Dennis Lindley. As a student I'd read his contributions to discussions in statistics journals and think, jeez, what an extremist, he just says the same damn thing over and over.
But now I'm like, yeah, informative priors, cut the crap, let's go baby. As I wrote in 2009, I suspect I'd agree with Lindley on just about any issue of statistical theory and practice. I've read some of Lindley's old articles and contributions to discussions and, even when he seemed like something of an extremist at the time, in retrospect he always seems to be correct.

One way we've moved away from the Bayesian cringe is by using the terminology of regularization. Remember how I said that lasso (and, more recently, deep nets) have made the world safe for regularization? And how I said that Bayesian inference is not radical but conservative (sorry, Lindley)? When we talk about regularization, we're saying that this kind of partial-pooling-toward-the-prior is desirable in itself. Rather than being a regrettable concession to bias that we accept in order to control our mean squared error, we argue that stability is a goal in itself. (Conservative, you see?)

We're not completely over the Bayesian cringe--look at just about any regression published in a political science journal, and within econometrics there are still some old-school firebreathers of the anti-Bayesian type--but I think we're gradually moving toward a general idea that it's best to use all available information, with informative priors being one way to induce stability and thus allow us to fit more complicated, realistic, and better-predicting models.
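To make the contrast concrete, here's a minimal sketch with made-up numbers (not any of the linked examples): the same noisy estimate combined with an essentially flat prior and with an informative "effects are probably small" prior, using conjugate normal-normal updating.

```python
# A minimal sketch with made-up numbers: one noisy estimate, two priors.
import numpy as np

def posterior_normal(y_bar, se, prior_mean, prior_sd):
    """Posterior for theta when theta ~ N(prior_mean, prior_sd^2)
    and y_bar | theta ~ N(theta, se^2)."""
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
    post_mean = post_var * (prior_mean / prior_sd**2 + y_bar / se**2)
    return post_mean, np.sqrt(post_var)

y_bar, se = 0.30, 0.25  # a noisy estimate of a plausibly small effect

flat = posterior_normal(y_bar, se, prior_mean=0.0, prior_sd=100.0)
strong = posterior_normal(y_bar, se, prior_mean=0.0, prior_sd=0.10)

print("flat prior:   posterior mean %.2f, sd %.2f" % flat)    # essentially the raw estimate
print("strong prior: posterior mean %.2f, sd %.2f" % strong)  # pulled most of the way to zero
```

The strong prior pulls the estimate most of the way toward zero, which is exactly what you want if you really believe effects in this area are small, and exactly what you should retreat from if you have information saying they could be huge.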
This entry was posted in Bayesian Statistics, Sociology, Zombies by Andrew. Bookmark the permalink.

21 thoughts on "The Bayesian cringe"

1. Keith O'Rourke on September 15, 2021 at 10:43 am said:

In statistics in general, there seems to have been a long-term attitude that assumptions are just a necessary price to pay to do the analysis, develop methods, and, admittedly, launder a lot of model uncertainty. From a scientific perspective, assumptions should be recognized as purposeful attempts to represent aspects of reality or to supplement such representations (i.e. priors for nuisance parameters). This includes checking the prior and data-generating assumptions as a way to make them less wrong (better connected with reality) and to evaluate the critical roles of each.

Reminds me of Freud admitting that psychology was not yet scientific, but he did not want to change careers. That is: I can't justify the priors I am using, but I want to do a Bayesian analysis anyway. Or, as someone once put it, pull the Bayesian crank and claim to have solved all the analysis problems.

On the other hand, there is a simple mitigation. If one is not sure of the justification of the prior, in most cases a frequency calibration can be carried out using simulation. This discloses any possible increases in frequency risks that one might be taking by not doing a frequency analysis. Often that will be acceptable. Refusing to do that because one feels frequency properties make no sense is like someone refusing to disclose a possible conflict of interest because they don't think it is a conflict. Let the end users (collaborators, audience, etc.) evaluate this on their own. Maybe this refusal should be called "Bayesian ostrich-ing."

  + Kyle C on September 15, 2021 at 10:55 am said:

  I might have said Bayesian Crouch, as this is not quite what The Kids Today mean by "cringe."
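A minimal sketch of the frequency calibration described in the comment above (made-up numbers, not anything Keith specifically ran): simulate data from a fixed true effect, update under the proposed prior each time, and record how often the nominal 95% posterior interval covers the truth.

```python
# A minimal sketch (made-up numbers): frequency calibration of a Bayesian
# interval by simulation. Fix a true effect, simulate many estimates,
# form the 95% posterior interval under a N(0, 0.1^2) prior each time,
# and record how often the interval covers the truth.
import numpy as np

rng = np.random.default_rng(0)
true_theta, se = 0.20, 0.25          # fixed truth and sampling error
prior_mean, prior_sd = 0.0, 0.10     # the prior whose justification is in doubt

n_sims, covered = 5000, 0
for _ in range(n_sims):
    y_bar = rng.normal(true_theta, se)                   # simulated estimate
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)   # conjugate update
    post_mean = post_var * (prior_mean / prior_sd**2 + y_bar / se**2)
    half_width = 1.96 * np.sqrt(post_var)
    covered += (post_mean - half_width <= true_theta <= post_mean + half_width)

print("coverage of the nominal 95%% posterior interval: %.3f" % (covered / n_sims))
```

In this setup the tight prior at zero covers a true effect of 0.2 well under 95% of the time, which is the kind of frequency risk the calibration is meant to disclose; whether that risk is acceptable is then up to the end users.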
2. Garnett on September 15, 2021 at 10:51 am said:

The Cringe will persist as long as significance testing is the dominant research paradigm. If we don't care about significance testing, then priors become interesting topics of legitimate, reasonable discussion.

3. Ken McAlinn on September 15, 2021 at 10:52 am said:

Some of his best writings are in the Valencia Proceedings (incl. "Is our View of Bayesian Statistics Too Narrow?"). Alas, they are near impossible to find these days (and the copies on Amazon charge you an arm and a leg).

4. Christian Hennig on September 15, 2021 at 11:43 am said:

The posterior inherits meaning from the prior. If the prior has no meaning, neither has the posterior. As a pluralist with some frequentist leanings, I like to see Bayesian analyses with a well-motivated prior, where the authors manage to convince me that the prior is used to bring in some useful information that otherwise would have been ignored. Sadly, the majority of Bayesian papers that I see don't bother to motivate the prior in any depth (sometimes convenience is used as the only argument, which is better than not having any, as I also often see; what I also often see is that the prior and hyperprior involve a number of choices and, out of say six choices, one is motivated from some prior information), which doesn't make me feel that anything is won by doing this the Bayesian way. De Finetti, by the way, wrote (from my memory, which isn't all too reliable) that there always is some information, and one should really make the effort to bring it out, so he wasn't really interested in "informationless priors."

5. Jordan on September 15, 2021 at 11:52 am said:

Jacob Feldman talks about this topic in his paper "Tuning your Priors to the World" (https://pubmed.ncbi.nlm.nih.gov/23335572/). From the abstract: "Whenever there is uncertainty about the environment--which there almost always is--an agent's prior should be biased away from ecological relative frequencies and toward simpler and more entropic priors." I wish Feldman would write more about this stuff!

  + Shravan Vasishth on September 18, 2021 at 5:30 am said:

  Wow, that is a very roundabout and verbose article that basically seems to be saying that one should use regularizing priors. Yeah sure, every practicing Bayesian knows that, no? Maybe I am missing something here.

6. Frequentist on September 15, 2021 at 2:58 pm said:

I only cringe when Bayesians claim sovereignty over the idea of quantifying uncertainty (present company excluded, of course!). Some frequentists care a lot about uncertainty and are perfectly capable of quantifying it rigorously.

  + Jordan on September 15, 2021 at 4:14 pm said:

  Hmm . . . how does a frequentist quantify uncertainty conditional on the data?

    o Frequentist on September 15, 2021 at 5:29 pm said:

    What do you mean by "conditional on the data"? Resampling is one way to gauge uncertainty about point estimates. There's also likelihood profiling.

  + Jack Gallagher on September 15, 2021 at 4:30 pm said:

  Capable, sure, but rarely willing when it comes to teaching statistics, in my experience. Perhaps the immense damage NHST has wreaked on the integrity of statistical analysis is ultimately Fisher's fault, but every frequentist STAT 101/201/301 prof who continues to teach the self-contradictory, unphysical NHST paradigm to trusting social and physical science undergraduates deserves the snootiness from Bayesians--the metaphorical blood of the replication crisis is ultimately on their (and Fisher's and Feller's and ...) hands.

7. Carlos Ungil on September 15, 2021 at 4:59 pm said:

> I suspect I'd agree with Lindley on just about any issue of statistical theory and practice

I think he was somewhat less sympathetic to the notion of "empirical Bayes". [there is no one less Bayesian than an empirical Bayesian]

  + Howard Edwards on September 15, 2021 at 5:32 pm said:

  Dennis Lindley visited our department in 1978 while I was a PhD student there. My PhD supervisor John Deely had published in the empirical Bayes literature and had challenged Lindley to explain the reasons behind his statement "there is no one less Bayesian than an empirical Bayesian". The result was their joint article in JASA, "Bayes Empirical Bayes" (December 1981, available on JSTOR).

8. Joseph Maher on September 15, 2021 at 4:59 pm said:

Somewhat off topic, but does anyone know a reasonably natural example of an iterated Simpson's paradox, where you can divide once to reverse the correlation, and then divide again to get back the original one?

9. David Marcus on September 16, 2021 at 8:50 am said:

When most people are wrong, it is the "extremists" who probably are right. I always thought Lindley was right.

10. Keith O'Rourke on September 16, 2021 at 9:34 am said:

> Lindley was right.

Well, this was his last talk: https://xianblog.wordpress.com/2013/08/20/dennis-lindley-and-tony-ohagan-on-youtube/

Now, to me, the axiom system that he envisions for statistics is just deduction, and although deduction is part of statistics (trying to learn about the empirical world), most of statistics has to be induction. That is the reason why the premises (prior, data-generating model, and data) need to be critically assessed/checked and the performance of misspecified models (premises) somehow assessed, which motivated my comment above: https://statmodeling.stat.columbia.edu/2021/09/15/the-bayesian-cringe/#comment-2023368

  + Silk on September 16, 2021 at 3:21 pm said:

  > most of statistics has to be induction.

  Premises are inherently deductive if you consider the body of research that came before. The interpretation of models, however, is inductive, and if one finds out that the premises don't hold, then this might also call for inductive reasoning. I can't see any contradiction.

    o Keith O'Rourke on September 17, 2021 at 8:48 am said:

    Deduction just involves the discernment of what is contained in the premises--their implications. Not at all sure what you mean by "Premises are inherently deductive". Perhaps the body of research that came before is abstracted into a model using abduction and induction? Then we are back to deduction from that (always wrong) model . . .
11. Shravan Vasishth on September 18, 2021 at 5:24 am said:

In almost all the work that we do nowadays in my lab, we just "buy the entire market": we always do a sensitivity analysis, using a range of priors going from mildly (un)informative to informative. It seems reasonable to report a range of posteriors under different priors. Usually, the target parameter is rock steady under different priors.

In one case only, I have fit models with three radically different priors, following a methodology spelt out in one of Spiegelhalter's books: an agnostic prior, an enthusiastic prior (informative, based on my own model's a priori predictions), and an adversarial (the opponent's informative) prior. The posterior looks very different under the enthusiastic and adversarial priors (which are the interesting cases). What I like is that I can formally encapsulate the scientific disagreement within the statistical models themselves.

I will never convince my scientific opponents through data that their belief is wrong, because their priors are so tight. The data don't really matter much if you already think you know the truth, which is a common disease in linguistics. I have come to the conclusion that Chomsky and his acolytes are actually right to ignore experimental data and just rely on intuition--what linguists do is prior self-elicitation, and then they base their arguments on their own prior predictive distributions of the "facts". In linguistics, what happens most often is that the data only end up sullying the priors.
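A minimal sketch of that kind of sensitivity analysis (hypothetical numbers, nothing from Shravan's lab): the same estimate combined with agnostic, enthusiastic, and adversarial normal priors, again via conjugate updating so the comparison stays transparent.

```python
# A minimal sketch of a prior sensitivity analysis (hypothetical numbers):
# one estimated effect, three priors, three posteriors reported side by side.
import numpy as np

def posterior_normal(y_bar, se, prior_mean, prior_sd):
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
    post_mean = post_var * (prior_mean / prior_sd**2 + y_bar / se**2)
    return post_mean, np.sqrt(post_var)

y_bar, se = 30.0, 12.0  # e.g., an estimated reading-time effect in ms and its standard error

priors = {
    "agnostic":     (0.0, 100.0),   # wide: let the data speak
    "enthusiastic": (40.0, 10.0),   # my theory's a priori prediction
    "adversarial":  (0.0, 5.0),     # the opponent: the effect is essentially zero
}

for name, (m, s) in priors.items():
    mean, sd = posterior_normal(y_bar, se, m, s)
    print("%-12s prior N(%5.1f, %5.1f): posterior mean %5.1f, sd %4.1f" % (name, m, s, mean, sd))
```

The agnostic prior more or less returns the data estimate, while the enthusiastic and adversarial posteriors land far apart: the scientific disagreement made explicit inside the models.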
12. Bob76 on September 28, 2021 at 12:24 pm said:

I think I understand the "Bayesian Cringe." However, I do not and never will suffer from it. The reason that I won't relates to how I became a self-aware Bayesian (everybody is really a Bayesian for important decisions in their own lives--whether they know it or not). I was exposed to Bayesian inference by two independent sources.

First, as those who have seen my earlier posts probably know, I am a communications engineer. For decades communications engineering has used tools based on Bayesian concepts to build systems. For example, literally billions of devices have been built (cellphones, digital TVs, CD/DVD players) that include a Bayesian processor. Specifically, the Viterbi Algorithm--often characterized as an example of a Bayesian belief propagation network--has been implemented in hardware billions of times. Understanding and using the Viterbi Algorithm does not require one to be a Bayesian--but it does require one to accept Bayesian reasoning at each step along the trellis. There are many other examples of the use of Bayesian reasoning in communications engineering. Cell phones work. Bayesian inference works. And, in my experience, using Bayesian reasoning and terminology in the communications engineering workplace is unexceptional.

If you look at one of the earliest textbooks setting forth the statistical theory of communications engineering (Davenport and Root, 1958), you will find an exposition of the Bayesian approach to hypothesis testing together with cost functions for the types of errors. In summarizing the Bayesian approach, the authors state: "There is often difficulty in applying it [Bayesian tests] first because of difficulty in assigning losses, second because of difficulty of assigning a priori probabilities." Later, in discussing hypothesis testing, the authors state: "The likelihood principle can be untrustworthy in certain applications. Obviously, if the observer uses a likelihood-ratio test when there is actually a probability distribution on O of which he is unaware, he may get a test which differs radically from the test for minimum probability of error." That's an endorsement of the Bayesian approach.

My second, completely independent, exposure to Bayesian reasoning was through the statistical decision theory of Pratt, Raiffa, and Schlaifer. I was exposed to it via some social contacts, and I also took a course on it from a professor who I think was one of Raiffa's PhD advisees. In the decision-making world, Bayesian reasoning seems somewhere between natural and unassailably necessary. I also took a course in frequentist statistics. It never really took. I should point out that the incorporation of the methods of frequentist statistics into communications engineering, which occurred in the 1940s and 50s, advanced the art and proved to be extremely useful.

My point is that my exposure to formal Bayesian methods was in contexts where it seems to me that they were/are incontrovertibly correct. The most compelling criticism of Bayesian statistics that I recall (from many years ago) was that it was too hard--too computationally difficult--to apply for everyday statistical problems. Well, with the techniques and computers of the 1970s it was too difficult. It appears to me (from reading this blog) that this is no longer the case. So, a "Bayesian Cringe" is justified only in those instances where one knows that the Bayesian tools have a significant probability of not working.

Bob76

PS. Maybe there is something in the water at Berkeley that induces a distaste for Bayesian thought.
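For readers who haven't met it, a minimal sketch of the Viterbi algorithm on a toy two-state hidden Markov model (hypothetical transition and emission probabilities, nothing from any real communications system): at each step along the trellis it combines the prior carried forward (transition probabilities) with the likelihood of the received symbol (emission probabilities) and keeps, for every state, the most probable path so far.

```python
# A minimal sketch (toy numbers): Viterbi decoding of a noisy binary sequence
# with a two-state hidden Markov model.
import numpy as np

states = ["0", "1"]                  # hypothetical transmitted symbols
start = np.array([0.5, 0.5])         # prior over the initial state
trans = np.array([[0.9, 0.1],        # trans[i, j] = P(next state j | current state i)
                  [0.1, 0.9]])
emit = np.array([[0.8, 0.2],         # emit[j, y] = P(received symbol y | true state j)
                 [0.2, 0.8]])
obs = [0, 0, 1, 1, 0]                # received (possibly corrupted) symbols

n, k = len(obs), len(states)
best = np.zeros((n, k))              # probability of the best path ending in each state
back = np.zeros((n, k), dtype=int)   # backpointers for recovering that path

best[0] = start * emit[:, obs[0]]
for t in range(1, n):
    for j in range(k):
        cand = best[t - 1] * trans[:, j] * emit[j, obs[t]]  # prior forward times likelihood
        back[t, j] = np.argmax(cand)
        best[t, j] = cand[back[t, j]]

# Trace back the most probable (maximum a posteriori) state sequence.
path = [int(np.argmax(best[-1]))]
for t in range(n - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
path.reverse()

print("received:", obs)
print("decoded :", [states[s] for s in path])
```

That "multiply the path probability by a transition prior and an emission likelihood, then keep the best" update is the Bayesian reasoning at each step along the trellis that the comment refers to.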