[HN Gopher] Playing in the Creek
       ___________________________________________________________________
        
       Playing in the Creek
        
       Author : c1ccccc1
       Score  : 323 points
       Date   : 2025-04-11 05:05 UTC (17 hours ago)
        
 (HTM) web link (www.hgreer.com)
 (TXT) w3m dump (www.hgreer.com)
        
       | profsummergig wrote:
       | Requesting someone to please explain the "coquina" metaphor.
        
         | hecanjog wrote:
         | I think that they're saying a little bit of playing around with
         | replacing thinking and composing with automated tools is
         | recoverable, but at an industrial or societal scale the damage
         | is significant. Like the difference between shoveling away some
         | sand with your hands to bury the small creatures temporarily
         | and actually destroying their habitat by "lobbying city council
         | members to put in a groin or seawall, and seriously move that
         | beach sand."
        
           | profsummergig wrote:
           | I skimmed the Anthropic report and didn't catch the negative
           | effects. Did they mention any? Good on them if they did.
        
             | hecanjog wrote:
             | Yes, they mention a few times the concern that students are
             | offloading critical thinking rather than using the tool for
             | learning.
        
               | Cthulhu_ wrote:
               | I just hope the educational institutions catch on, stick
               | with their principles and don't give them the paperwork.
                | The paper / title should be evidence of students'
                | learning and thinking abilities, not just of their
                | output.
        
         | xmprt wrote:
         | My understanding is that the author is this superior being
         | trying to accomplish a massive task (damming a beach) while
         | knowing that it could cause problems for these clams. In the
         | real world, Anthropic is trying to accomplish a massive task
         | (building AGI) and they're finally starting to notice the
         | potential impacts this has on people.
        
         | jjcob wrote:
         | Coquinas are clams that bury themselves in the sand very close
         | to the surface [1]. The author worries that while they are
         | playing with the sand, they might accidentally bury coquina
         | clams too deep and kill them because they can no longer reach
         | the surface.
         | 
          | Anthropic apparently is starting to notice the possible
          | danger their work poses to others. I'm not sure what they
          | are referring to.
         | 
         | [1]: https://www.youtube.com/watch?v=KZUlf7quu3o
        
           | profsummergig wrote:
            | > Anthropic apparently is starting to notice the possible
            | danger their work poses to others. I'm not sure what they
            | are referring to.
           | 
           | Are they being vague about the danger? If possible, please
           | link to a communique from them. I've missed it somehow.
           | Thanks.
        
             | vermilingua wrote:
             | https://www.anthropic.com/news/anthropic-education-report-
             | ho...
             | 
             | Discussed here yesterday:
             | https://news.ycombinator.com/item?id=43633383
        
               | profsummergig wrote:
               | Thank you.
        
           | deathanatos wrote:
           | As a child at the beach, I would think noticing the clams
           | would result in attempting to unearth them. Childhood
           | curiosity about why there are bubbles.
           | 
           | Your explanation makes more sense, however.
        
         | ern wrote:
         | Maybe I'm not smart enough, or too tired to decode these
         | metaphors, so I plugged the essay into ChatGPT and got a clear
         | explanation from 4o.
        
           | profsummergig wrote:
           | Ah. Should have thought of that. Going to do that now.
           | Thanks.
        
           | criddell wrote:
           | Are you at all concerned that plugging stuff like this into
           | ChatGPT is leaving you with weaker cognitive muscles? Or is
           | it more similar to what people do when they see a new word
           | and reach for their dictionary?
        
             | adwn wrote:
             | > _Are you at all concerned that plugging stuff like this
             | into ChatGPT is leaving you with weaker cognitive muscles?_
             | 
             | Couldn't this very same argument have been used against
             | _any_ form of mental augmentation, like written language
             | and computers? Or, in an extended interpretation, against
             | any form of physical augmentation, like tool use?
        
               | criddell wrote:
               | You can argue whatever you want to argue.
               | 
               | I make my living with my brain so I do worry about the
               | downsides of removing boredom and mental struggle from my
               | days.
        
               | Workaccount2 wrote:
               | It's almost certainly going to be bad, and almost
               | certainly going to be unavoidable.
               | 
               | I can't spell for shit anymore. Ever since auto correct
               | became omnipresent in pretty much all writing fields, my
               | brain just kinda ditched remembering how to spell words.
               | 
               | buuuttt
               | 
               | Manual labor has been obsolete for at least 100 years now
               | for certain classes of people, and fitness is still an
               | enormous recreational activity people partake in. So even
               | in an AI heavy society, I still strongly suspect there
               | will be "brain games" that people still enjoy and
               | regularly play.
        
               | criddell wrote:
               | We aren't talking about something like spelling or
               | digging a hole. We're talking about a fundamental
               | cognitive skill: reading eight short paragraphs of text
               | and extracting meaning from it.
        
               | profsummergig wrote:
               | > eight short paragraphs of text
               | 
               | Fair point. But they are heavily metaphor-laden
               | paragraphs.
               | 
               | Textual interpretation is a highly subjective activity.
               | Entire careers consist of interpreting, reinterpreting,
               | and discussing texts that others have already
               | interpreted. Film critics, book reviewers, political
               | pundits, TV anchors, podcasters, etc.
               | 
               | 'In 1972, Chinese premier Zhou Enlai was asked about the
               | impact of the French Revolution. "Too early to say," he
               | replied'
               | 
               | I had my own sense of what the "coquina" metaphor stood
               | for. I wanted to see other peoples' interpretations.
               | Turns out my interpretation was wrong.
        
               | profsummergig wrote:
               | > I can't spell for shit anymore.
               | 
               | This is increasingly happening to me every day. Hope the
               | alien overlords don't have spelling tests (as their
               | version of IQ tests) to separate the serfs from the
               | field-masters.
        
               | steve_adams_86 wrote:
               | Me too.
               | 
               | There is another side to this, which is maybe we don't
               | need to know a lot of things.
               | 
               | It was true with search engines already, but maybe truer
               | with LLMs. That thing you're querying probably doesn't
               | actually matter. It's neurotic digging and searching for
               | an object you will never use or benefit from. The urge to
               | seek is strong but you won't find the thing you're
               | searching for this way.
               | 
               | You might learn more by just going for a walk.
        
               | TimorousBestie wrote:
                | In fact it has been, dating all the way back to
                | Plato's Phaedrus.
               | 
               | > If men learn [writing], it will implant forgetfulness
               | in their souls. They will cease to exercise memory
               | because they rely on that which is written, calling
               | things to remembrance no longer from within themselves,
               | but by means of external marks.
        
             | ern wrote:
             | I see AI like the reading glasses I'll soon need -- not
             | because I can't think clearly, but because it helps cut
             | through things faster when my brain's juggling too much.
             | 
             | A few years ago, I'd have quietly filed this kind of
             | article under "too hard" or passed a log analysis request
             | from the CIO down the line. Now? I get AI to draft the
             | query, check it, run it, and move on. It's not about
             | thinking less -- it's about clearing the clutter so I can
             | focus where it counts.
        
         | cubefox wrote:
          | Anthropic (Claude.ai) mentions in their report on LLMs and
          | education that students use Claude to cheat and to do their
          | work for them:
         | 
         | https://www.anthropic.com/news/anthropic-education-report-ho...
        
       | doctoboggan wrote:
       | This is an excellent essay, and I feel similar to the author but
       | couldn't express it as nicely.
       | 
       | However if we are counting on AI researchers to take the advice
       | and slow down then I wouldn't hold my breath waiting. The author
       | indicated they stepped away from a high paying finance job for
        | moral reasons, which is admirable. But Wall Street continues
        | on and does not lack for people willing to play the "make as
        | much money as you can" game.
        
         | yapyap wrote:
         | > However if we are counting on AI researchers to take the
         | advice and slow down then I wouldn't hold my breath waiting.
         | The author indicated they stepped away from a high paying
         | finance job for moral reasons, which is admirable. But
          | Wall Street continues on and does not lack for people willing to
         | play the "make as much money as you can" game.
         | 
          | I doubt OP is counting on it; they are more expressing what
          | an optimal world would look like, so that people can work
          | towards it if they feel like it, or just putting the idea
          | out there.
        
         | dachris wrote:
         | The paperclip maximizers are already here, but they are
         | maximizing money.
         | 
         | One recent HN comment [0] comparing corporations and
         | institutions to AI really stuck with me - those are already
         | superhuman intelligences.
         | 
         | [0] https://news.ycombinator.com/item?id=43580681
        
           | actionfromafar wrote:
           | I could imagine a Star Trek episode where someone says "I
           | always assumed the paperclip optimizer was a parable for
           | unchecked capitalism?"
        
           | bitethecutebait wrote:
           | > those are already superhuman intelligence(s)
           | 
           | ... only because "unsafe" and "leaky" are a Ponzi's best-and-
           | loves-to-be-roofied-and-abused friend ... you see,
           | intelligence is only good when it doesn't irreversibly break
           | everything to the point where most of the variety of the
           | physical structure that evolved it and maintains it is lost.
           | 
           | you could argue, of course, and this is an abbreviated
           | version, that a new physical structure then evolves a new
           | intelligence that is adapted (emerged from and adjusts to) to
           | the challenges of the new environment but that's not the
           | point of already capable self-healing systems;
           | 
           | except if the destructive part of the superhuman intelligence
            | is more successful with its methods of sabotage and
           | disruption of
           | 
           | (a) 'truthy' information flow and
           | 
           | b) individual and collective super-rational agency -- for the
           | good of as many systems-internal entities as possible, as a
           | precaution due to always living in uncertainty and being
           | surrounded by an endless amount of variables currently tagged
           | "noise"
           | 
            | -- than its counterpart is in enabling and propagating a)
           | and b) ...
           | 
           | in simpler words, if the red team FUBARS the blue team or
           | vice versa, the superhuman intelligence can be assumed to
           | have cancer or that at least some vital part of the force is
           | corrupted otherwise.
        
           | LinuxAmbulance wrote:
           | Corporations certainly have advantages over individuals, but
           | classifying them as superhuman intelligences misses the mark.
           | I'd go with a blind ravenous titan instead.
        
         | chipsrafferty wrote:
         | A finance job is a zero-sum game. Most tech jobs are negative
         | sum, in that they make the world worse. You have the wrong
         | takeaway here. Companies like Amazon and Google and OpenAI and
         | the like are not-so-slowly destroying our planet and companies
         | like Citadel just move money around.
        
       | unwind wrote:
       | Ah, this [1] meaning of tillering (bending wood to form a bow),
       | not this [2] (production of side shoots in grasses). The joys of
       | new words.
       | 
       | [1]: https://www.howtomakealongbow.co.uk/part-5-tillering
       | 
       | [2]: https://en.wikipedia.org/wiki/Tiller_(botany)
        
         | defrost wrote:
         | As I recall tillering is more about the _shaping_ of the bow to
         | achieve an optimal bend and force delivery on release.
         | 
         | It's an iterative process of bending and shaping, bending
         | again, and wood removal in stages.
        
       | axpvms wrote:
       | My backyard creek also had crocodiles in it.
        
         | seafoamteal wrote:
         | Florida?
        
           | tilne wrote:
           | No they've got a little place on the Nile
        
       | MrBuddyCasino wrote:
        | That was a well-written essay with a non-sequitur AI Safety
        | thing tacked onto the end. His real-world examples were
        | concrete, and the reason to stop escalating easy to understand
        | ("don't flood the neighbourhood by building a real dam").
       | 
        | The AI angle is not even hypothetical: there is no attempt
       | to describe or reason about a concrete "x leading to y", just
       | "see, the same principle probably extrapolates".
       | 
       | There is no argument there that is sounder than "the high
       | velocities of steam locomotives might kill you" that people made
       | 200 years ago.
        
         | luc4sdreyer wrote:
         | > the high velocities of steam locomotives might kill you
         | 
         | This obviously seems silly in hindsight. Warnings about radium
         | watches or asbestos sound less silly, or even wise. But neither
         | had any solid scientific studies showing clear hazard and risk.
         | Just people being good Bayesian agents, trying to ride the
         | middle of the exploration vs. exploitation curve.
         | 
         | Maybe it makes sense to spend some percentage of AI development
         | resources on trying to understand how they work, and how they
         | can fail.
        
           | ripe wrote:
           | > This [steam locomotives might kill you] obviously seems
           | silly in hindsight.
           | 
           | To be fair, many people did die on level crossings and by
           | wandering on to the tracks.
           | 
           | We learned over time to put in place safety fences and
           | tunnels.
        
             | Gracana wrote:
             | People thought that the speed itself was dangerous, that
             | the wind and vibration and landscape screaming by at 25mph
             | would cause physical and mental harm.
        
           | MrBuddyCasino wrote:
           | > _Warnings about radium watches or asbestos sound less
           | silly, or even wise. But neither had any solid scientific
           | studies showing clear hazard and risk._
           | 
           | In the case of asbestos, this is incorrect. Many people knew
           | it was deadly, but the corporations selling it hid it for
            | _decades_, killing thousands of people. There are quite a
           | few other examples besides asbestos, like leaded fuel or
           | cigarettes.
        
         | iNic wrote:
         | The progress-care trade-off is a difficult one to navigate, and
         | is clearly more important with AI. I've seen people draw
         | analogies to companies, which have often caused harm in pursuit
         | of greater profits, both purposefully and simply as byproducts:
         | oil-spills, overmedication, pollution, ecological damage, bad
         | labor conditions, hazardous materials, mass lead poisoning. Of
         | course, the profit seeking company as an invention has been one
         | of the best humans have ever made, but that doesn't mean we
         | shouldn't take "corp safety" seriously. We pass various laws on
         | how corps can operate and what they can and can not do to limit
         | harms and _align_ them with the goals of society.
         | 
          | So it is with AI. Except, corps are made of people that work
          | at people speeds, have vague morals, and are tied to society
          | in ways AI might not be. AI might also be able to operate
          | faster and with less error. So extra care is required.
        
       | khazhoux wrote:
       | Parents: you know how every day you look at your child and you're
       | struck with wonder at the amazing and quirky and unique person
       | your little one is?
       | 
       | I swear that's what lesswrong posters see every day in the
       | mirror.
        
       | DrSiemer wrote:
        | So many articles and comments claim AI will destroy critical
       | thinking in our youths. Is there any evidence that this
       | conviction that many people share is even remotely true?
       | 
        | To me it just seems like the same old knee-jerk Luddite
        | response people have had, since the dawn of time, to any
        | powerful new technology that challenges the status quo. The
        | calculator did not erase math wizards, the television did not
        | replace books, and so on. It just made us better, faster, more
        | productive.
       | 
       | Sometimes there is an adjustment period (we still haven't figured
       | out how to deal with short dopamine hits from certain types of
       | entertainment and social media), but things will balance
       | themselves out eventually.
       | 
       | Some people may go full-on Wall-E, but I for one will never stop
       | tinkering, and many of my friends won't either.
       | 
       | The things I could have done if I had had an LLM as a kid... I
       | think I've learned more in the past two years than ever before.
        
         | iNic wrote:
         | I don't think you got the point of the article? It is saying
         | that we as wise humans know (sometimes) when to stop optimizing
         | for a goal, due to the negative side effects. AIs (and as some
         | other people have pointed out corporations) do not naturally
         | have this line in their head, and we must draw such lines
         | carefully and with purpose for these superhuman beings.
        
         | dsign wrote:
         | > Ai will destroy critical thinking in our youths
         | 
         | I don't think that's the argument the article was making. It
          | was, to my understanding, a more nuanced question about
          | whether we want to destroy or severely disturb systems at
          | equilibrium by letting AI systems infiltrate our society.
         | 
         | > Sometimes there is an adjustment period (we still haven't
         | figured out how to deal with short dopamine hits from certain
         | types of entertainment and social media), but things will
         | balance themselves out eventually.
         | 
          | One can zoom out a little bit. The issue didn't start with
          | social media, nor with AI. "Star Wars: A New Hope" is, to my
          | understanding, an incredibly good film. It came out in 1977
          | and it's a great story made to be appreciated by the masses.
          | And in trying to achieve that goal, it really wasn't
          | intellectually challenging. We have continued downhill for a
          | while, and now we are at 16-second stingers on TikTok and
          | YouTube. So, the way I see it, things are _not_ balancing
          | out. Worse, people in the USA elected D.J. Trump because
          | somehow they couldn't understand how this real-world Emperor
          | Palpatine was the bad guy.
        
         | Tistron wrote:
          | I would expect people today to be quite a lot worse at
          | mental arithmetic than we used to be before calculators. And
          | worse at memorizing stuff than before writing.
         | 
         | We have tools to help us with that, and maybe it isn't a big
         | loss? And they also bring new arenas and abilities.
         | 
         | And maybe in the future we will be worse at critical thinking
         | (https://news.ycombinator.com/item?id=43484224), and maybe it
         | isn't a big loss? It is hard to imagine what new abilities and
         | arenas will emerge. Though I think that critical thinking is a
         | worse loss than memory and mental arithmetic. Though, also, we
         | are probably a lot less good at it than we think we are,
         | generally.
        
         | hacb wrote:
         | > The calculator did not erase math wizards
         | 
         | The major difference is that in order to use a calculator, you
         | need to know and understand the math you're doing. It's a tool
         | you can work with. I always had a calculator for my math exams
         | and I always had bad grades :)
         | 
         | You don't have to know how to program to ask ChatGPT to build
         | yet another app for you. It's a substitute for your brain. My
          | university students get good grades on their take-home
          | exams, but can't spot an off-by-one error in a three-line
          | Golang for loop during an in-person exam.
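
A minimal sketch of such a bug, in Go since that's the language the
commenter mentions (the function name and values here are
hypothetical, not taken from any actual exam):

```go
package main

import "fmt"

// sumFirstN is meant to sum the integers 1..n, but the loop
// condition `i < n` stops one iteration early -- the classic
// off-by-one: the loop never adds n itself.
func sumFirstN(n int) int {
	total := 0
	for i := 1; i < n; i++ { // bug: should be i <= n
		total += i
	}
	return total
}

func main() {
	fmt.Println(sumFirstN(5)) // prints 10; the intended sum 1..5 is 15
}
```

Spotting that the loop condition excludes n is exactly the sort of
three-line check the in-person exam asks for.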
        
           | DrSiemer wrote:
           | This is incorrect. You very much need to know how to program
           | to make an AI build an app for you. Language models are not
           | capable of creating anything new without significant guidance
           | and at least some understanding of the code, unless you're
           | asking it to create projects that tutorials have been written
            | about. AI in its current form is also just "a tool you can
           | work with".
           | 
           | Like with the calculator, why would you need to be able to
           | calculate things on paper if you can just have a machine do
           | it for you? Same goes for more advanced AI: what's the point
           | of being able to do things without them?
           | 
           | Not to offend, but in my opinion that's nothing more than a
           | romantic view of what humans "should be capable of". 10 years
           | from now we can all laugh at the idea of people defending
           | doing stuff without AI assistance.
        
             | hacb wrote:
             | Of course, an AI will not "magically" code an app the same
             | way 10 developers will do in a year, I don't think we
             | disagree on this.
             | 
             | However, it allows you to do things you don't understand.
             | I'm again taking examples from what I see at my university
             | (n=1): almost all students deliver complex programming
             | projects involving multi-threading, but can't answer a
              | basic quiz about the same language in person. And by basic
             | question I mean "select among the propositions listed below
             | the correct keyword used to declare a variable in Golang".
             | I'm not kidding, at least one-third of the class is
             | actually answering something wrong here.
             | 
             | So yeah, maybe we as a society agree on the fact that those
             | people will not be software engineers, but prompt
             | engineers. They'll send instructions to an agent that will
             | display text in a strange and cryptic language, and maybe
              | when they press "Run" the lights will be green. But as a
             | professional, why should I hire them once they earned their
             | diploma? They are far from being ready for the professional
             | world, can't debug systems without using LLMs (and maybe
             | those LLMs can't help them because the company context is
             | too important), and most importantly they are way less
             | capable than freshly graduated engineers from a few years
             | back.
             | 
             | > 10 years from now we can all laugh at the idea of people
             | defending doing stuff without AI assistance.
             | 
             | I hope so, but I'm quite pessimistic unfortunately.
             | Expertise and focus capabilities are dying, and we are more
             | and more relying on artificial "intelligence" and its
             | biases. But the future will tell
        
               | DrSiemer wrote:
               | Isn't it irrelevant that students do not have the answer
               | to a basic quiz though? In a real life situation, they
               | can just _ask an LLM_ if they need to know something.
               | 
               | I don't believe having this option will make people a lot
               | less functional. Sure, some may slip through the cracks
               | by faking it, but we'll soon develop different metrics to
                | judge somebody's true capabilities. Actually, we'll
               | probably create AI for that as well.
               | 
               | As a professional, you hire people who get things done.
                | If that means hiring skilled LLM users who do not fully
               | understand what they produce, but what they make
               | consistently works about as often as classic dev output
               | does, and they do this in a fraction of the time... You
               | would be crazy _not_ to hire them.
               | 
               | It's true that inexperienced developers will probably
                | generate massive tech debt during the period when AI is
               | good enough to provide code, but not good enough to fish
               | out hidden bugs. It will soon surpass humans at that
               | skill though, and can then quickly clean up all the
               | spaghetti.
               | 
               | Over the last two years my knowledge on how to perform
               | and automate repetitive and predictable tasks has
               | gradually worn away, replaced by a higher level
               | understanding of software architecture. I use it to guide
               | language models to a desired outcome. For those that want
               | to learn, LLM's excel at explaining code. For this, and
               | plenty of other subjects, it's the greatest learning tool
               | we have ever had! All it takes is a curious mind.
               | 
                | We are in a transitional time and we simply need to
               | figure out how to deal with this new technology, warts
               | and all. It's not like there is an alternative scenario;
               | it's not going to go away...
        
         | fragmede wrote:
         | > The calculator did not erase math wizards
         | 
         | But it did. Quick, what's 67 * 49? A math wiz would furrow
         | their brow for a second and be able to spit out an answer,
         | while the rest of us have to pull out a calculator. When you're
         | doing business in person and have to move numbers around,
         | having to stop and use a calculator slows you down. If you
         | don't have a role where that's useful then it's not a needed
          | skill and you don't notice it's missing, like riding a
          | horse, but that doesn't mean the skill itself wouldn't be
          | useful to have.
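
For what it's worth, the shortcut such a math wiz would likely reach
for is rounding to a friendly number and correcting:

```latex
67 \times 49 = 67 \times (50 - 1) = 3350 - 67 = 3283
```

One multiplication by a round number and one subtraction, both easy
enough to hold in your head.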
        
       | BrenBarn wrote:
       | It's a nice article. In a way though it kind of bypasses what I
       | see as the main takeaways.
       | 
       | It's not about AI development, it's about something mentioned
       | earlier in the article: "make as much money as I can". The
       | problems that we see with AI have little to do with AI
       | "development", they have to do with AI marketing and
       | promulgation. If the author had gone ahead and dammed the creek
       | with a shovel, or blown off his hand, that would have been bad,
       | but not _that_ bad. Those kinds of mistakes are self-limiting
        | because if you're doing something for the enjoyment or challenge
       | of it, you won't do it at a scale that creates more enjoyment
       | than you personally can experience. In the parable of the CEO and
       | the fisherman, the fisherman stops at what he can tangibly
       | appreciate.
       | 
       | If everyone working on and using AI were approaching it like
       | damming a creek for fun, we would have no problems. The AI models
       | we had might be powerful, but they would be funky and disjointed
       | because people would be more interested in tinkering with them
       | than making money from them. We see tons of posts on HN every day
       | about remarkable things people do for the gusto. We'd see a bunch
       | of posts about new AI models and people would talk about how cool
       | they are and go on not using them in any load-bearing way.
       | 
       | As soon as people start trying to use anything, AI or not, to
       | make as much money as possible, we have a problem.
       | 
       | The second missed takeaway is at the end. He says Anthropic is
       | noticing the coquinas as if that means they're going to somehow
       | self-regulate. But in most of the examples he gives, he wasn't
       | stopped by his own realization, but by an external authority
       | (like parents) telling him to stop. Most people are not as self-
       | reflective as this author and won't care about "winning zero sum
       | games against people who don't necessarily deserve to lose", let
       | alone about coquinas. They need a parent to step in and take the
       | shovel away.
       | 
       | As long as we keep treating "making as much money as you can" as
       | some kind of exception to the principle of "you can't keep doing
       | stuff until you break something", we'll have these problems, AI
       | or not.
        
         | noduerme wrote:
         | This is such a well-written response. There's something
         | intentionally soothing about this post that slowly turns into a
         | jarring form of self-congratulation as it goes along.
         | Congratulations for knowing there's a limit to wrecking your
         | parents' property. Congratulations for being able to appreciate
         | the sand on the beach, in some no doubt instagrammable moment
         | of existential simplicity. Congratulations for being so smart
         | that you _could_ have blown up your hand. And for
         | "Leetcoding", whatever the fuck that means. And for claiming
         | you quit a shady job because you got bored (but possibly also
         | grew a conscience). And then topped off by the final turn:
         | "This is, of course, about artificial intelligence
         | development". I'd only add one thing to your analysis: We've
         | got a demo right here of a psyche that would prefer love to
         | money (but mostly both), and it's still determined to foist bad
         | things onto the world in a load-bearing way, as a bid for
         | either, or whatever it can get. My parents used to call that "a
         | kid that doesn't care if he gets good or bad attention, as long
         | as he gets attention." I think that's the root driver for
         | almost all the tech billionaires of the past 20 years, and the
         | one thing that unites Bezos, Zuck, Jobs, Dorsey, Musk... it's:
         | "Look dad, I didn't just _take_ your money. I'm so smart I
         | could'a blown off my hand with all those fireworks you bought
         | me, but see? Two hands! Look how much money I made from your
         | money! Why aren't you proud of me?! Where can I find love?
         | Maybe if I tell people what a leetcoder I am and how I could be
         | making BAD AI but I'm just making GOOD AI, then everyone will
         | love me."
         | 
         | Don't get me wrong, I'm not immune to these feelings either. I
         | want to do good work and I want people to love what I do. But
         | there's something so... so fucking nakedly exhibitionist and
         | narcissistic about these kinds of posts. Like, so, GO FUCKING
         | LAY WITH CLAMS, write a novel, the world is waiting for it if
         | you're really a genius. Have the courage to say you have a
         | conscience if you actually do. Leave the rest of us alone and
         | stop polluting a world you don't understand with your childish
         | greed and self-obsession.
        
           | bombcar wrote:
           | I've often wondered how, with billions of dollars, do you
           | know someone actually loves you and not your money?
           | 
           | Complicated!
        
             | noduerme wrote:
             | I've got a particularly strong view on this, because I've
             | got a brother who tried to get wildly rich in some
             | seriously unethical ways to impress our father, and still
             | never got a single word of praise from him. And who's
             | miserable and unloved and been betrayed by the women he
             | married... who married him for his money. He's so desperate
             | for someone to come admire his cars and his TVs, to just
             | come hang out with him. He pays for friends.
             | 
             | Me, I don't have billions of dollars, but I might be in the
             | top 10% or something. And I just cringe when I see guys use
             | their money and status or job title, or connections, or
             | cars or shoes or... anything they _have_ as opposed to who
             | they _are_ as a way to impress people. (Women, usually). I
             | understand this is what they think they have to do. Like, I
             | understand that's how primates function, and you're just
             | doing what apes do, but do they seriously think they'll
             | ever be able to trust anyone who pretends to like them
             | _after_ that person thinks they're rich?
             | 
             | Maybe I'm just lucky I got to watch it up close when I was
             | a teenager. Lol. My brother's first wife, at his wedding,
             | got up and gave a speech... she said, "my friends all said
             | he was too short, but I told them he was taller when he was
             | standing on his wallet". Some people laughed. I didn't.
             | After fifteen years of screaming at each other and drug
             | abuse, she committed suicide and he got with the next
             | secretary who hated him but wanted his money. Oh well.
             | 
             | My answer has always been to appear to be poor as fuck
             | until I know what drives someone. When I meet a girl, I'll
             | open doors and always buy dinner... at a $2 taco joint. And
             | make sure she offers to buy the next round of drinks. I'll
             | play piano in a random bar, and make her sing along. I'll
             | order her the cheapest beer. I'll show her a painting I
             | made and tell her I can't make any money selling 'em, is
             | why I'm broke. If anyone asks me what I do, I don't say SWE
             | or CTO, I say I'm a writer or a musician between things.
             | And I'll do this for months until I get to know a person.
             | Yeah, it's a test. The girls I've had relationships with,
             | the girl I'm with right now, passed it. She doesn't even
             | want to know. She says, whatever you got, I could've been
             | with someone richer than you but I didn't want that life,
             | so play piano for me. I'm not saying I've got the key to
             | happiness, or humility, and maybe I'm a total asshole too,
             | but... at least I'm not an asshole who's so hollow they
             | have to crow about their job or their money to find "love"
             | from people who - let's say this - can not, and will not
             | ever love them.
        
               | bombcar wrote:
               | One of the things I've heard, and found to be true, is
               | that if you don't love yourself it's going to be terribly
               | hard for others to love you
        
               | munificent wrote:
               | It tickles me that this quote came from a YA novel of all
               | places, but in The Perks of Being a Wallflower, Chbosky
               | writes "We accept the love we think we deserve".
               | 
               | If that isn't one of the deepest aphorisms on psychology
               | out there, I don't know what is.
        
               | cafard wrote:
               | During most of my single days, I didn't have to pretend
               | to be poor as fuck. On the other hand, I didn't really
               | need to impress my father.
        
               | ryandrake wrote:
               | > she said, "my friends all said he was too short, but I
               | told them he was taller when he was standing on his
               | wallet". Some people laughed. I didn't.
               | 
               | Hey, as long as they are both up front and clear about
               | what they are getting out of their relationship. They're
               | grown adults after all. I knew someone who proudly would
               | admit he was a "sugar daddy" and both he and his
               | "girlfriends" would fully agree that their relationships
               | were transactional and contingent on the money flow. I
               | knew someone in college who was very open and
               | unapologetic that her plan was to find and marry someone
               | rich. There's no right and wrong.
        
           | chipsrafferty wrote:
           | > But there's something so... so fucking nakedly
           | exhibitionist and narcissistic about these kinds of posts.
           | 
           | You've precisely defined why nobody takes LessWrong
           | seriously.
        
         | ChrisMarshallNY wrote:
         | _> As soon as people start trying to use anything, AI or not,
         | to make as much money as possible, we have a problem._
         | 
         | I noticed that, around the turn of the century, when "The Web"
         | was suddenly all about the Benjamins.
         | 
         | It's sort of gone downhill, since.
         | 
         | For myself, I've retired, and putter around in my "software
         | garden." I do make use of AI, to help me solve problems, and
         | generate code starts, but I am into it for personal
         | satisfaction.
        
           | FollowingTheDao wrote:
           | Well you certainly extracted enough wealth from the system
           | you now hate. Good for you!
           | 
           | (Self-reflect on the fact that your whole existence was for
           | your "personal satisfaction" so nothing has changed.)
        
             | hobs wrote:
             | Are you jealous or mad that they didn't do more for you?
             | Neither is a good look really. What have you done for me
             | lately?
        
               | FollowingTheDao wrote:
               | This is a typical response of the sociopath. They cannot
               | understand that I could be upset for people who are not
               | myself so they make it about "me".
               | 
               | I am mad that the tech industry is full of apologists
               | that increased the separation of wealth in this country
               | by working for these companies, taking their share, and
               | now boast about how moral they are by leaving when their
               | life is secure while millions go hungry in "the richest
               | country on earth".
               | 
               | I am disabled, homeless, living in a minivan. In the
               | summer I help build tiny homes for the homeless in the
               | Pacific Northwest. In the winter I help people outfit
               | their minivans so they can have a shelter in the SW
               | desert.
        
               | ChrisMarshallNY wrote:
               | Sorry to hear that. Not our fault, and it won't make your
               | life any better to be bitter about it. It certainly
               | doesn't help you, in the least, to be attacking folks in
               | a public professional forum.
               | 
               | You're also not the only one doing charity work.
               | 
               | Just sayin'.
        
               | FollowingTheDao wrote:
               | I'm not bitter, I'm angry and upset. You want me to be
               | bitter because that's the only thing you can understand. I
               | don't care if it helps me, but that's all you can think
               | about, being a sociopath or a partial sociopath: how it
               | affects you, not how it affects the larger society around
               | you.
               | 
               | I'm not afraid to express my true feelings in an open
               | public forum because that's what being genuine is about.
               | It's about not being afraid to do something based on your
               | principles, not based on the fear of "what it can get for
               | me" or "what will I lose".
        
               | sepositus wrote:
               | Do you think this style of argumentation is constructive
               | and beneficial for the broader good of society? I can't
               | think of a single person I've met who would respond
               | positively to being labeled a (partial) sociopath after
               | being able to only express a couple of paragraphs of
               | thought.
        
               | FollowingTheDao wrote:
               | Yes, I do. I'm probably never going to change his mind
               | (maybe I will, but probably not), but other people reading
               | this can take sides and think about it in a non-direct
               | way.
               | 
               | Jesus turned over tables when they were trying to profit
               | inside the church. His movement seemed to turn out pretty
               | good.
        
               | sepositus wrote:
               | Fair enough, but I would be concerned about the people
               | who think that making (offensive) psychiatric diagnoses
               | over the internet is a good thing that should be
               | promoted. In the best case, it only confirms people's
               | biases and does nothing to move the needle towards unity
               | rather than continued division.
               | 
               | > Jesus turned over tables when they were trying to
               | profit inside the church. His movement seemed to turn out
               | pretty good.
               | 
               | Applying this story to posting anonymous comments on an
               | internet forum seems like a stretch. There are hardly any
               | meaningful consequences for your decision to write in
               | this way, whereas Jesus very much became a target after
               | that demonstration.
        
               | dvaun wrote:
               | Have you considered Socratic questioning and other forms
               | of conversation, in order to effect more change?
               | 
               | See https://www.streetepistemology.com/ for content about
               | this. It is possible to guide discussions in a healthy
               | manner and with positive goals in mind.
        
               | saagarjha wrote:
               | Other people are reading it (hello), and they're unlikely
               | to take your side if you call random commenters here
               | sociopaths.
        
               | babelfish wrote:
               | Have you tried following the dao, instead?
        
             | JKCalhoun wrote:
             | I'm retired as well, dislike what we have for the internet
             | these days.
             | 
             | In reflecting on my career I can say I got into it for the
             | right reasons. That is, I liked programming -- but also
             | found out fairly quickly that not everyone could do it and
             | so it could be a career path that would prove lucrative.
             | And this in particular for someone who had no other
             | likelihood, for example, of ever owning a home. I was
             | probably not going to be able to afford graduate school
             | (had barely paid for state college by working minimum wage
             | jobs throughout college and over the summers) and
             | regardless I was not the most studious person. (My degree
             | was Education -- I had expected a modest income as a career
             | high school teacher).
             | 
             | But as I say, I enjoyed programming at first. And when it
             | arrived, the web was just a giant BBS as far as I was
             | concerned and so of course I liked it. But it is possible
             | to find that a thing you really like can go to shit over
             | the ensuing decades. (And for that matter, my duties as an
             | engineer got shittier as well as the career "evolved". I
             | had not originally signed up for code reviews, unit tests,
             | scrum, etc. Oh well.)
             | 
             | Money as a pursuit made sense to me after I was in the
             | field and saw that others around me were doing quite well
             | -- able as I say, to afford to buy a home -- something I
             | had assumed would always be out of reach for me (my single
             | mother had always rented, I assumed I would as well -- oh,
             | I still had a modest college loan to pay off too). So I
             | learned about 30-year home loans, learned about the real
             | estate market in the Bay Area, learned also about RSUs,
             | capital gains tax, 401Ks, index funds, etc.
             | 
             | But as is becoming a theme in this thread (?) at some point
             | I was satisfied that I had done enough to secure a home,
             | tools for my hobbies, and had raised three girls -- paid
             | for their college. I began to see the now burdensome career
             | I was in as an albatross around my soul. The technology
             | that I had once enjoyed, made my career on the back of, had
             | gone sour.
        
               | FollowingTheDao wrote:
               | It went sour for the same reason that you were "satisfied
               | that I had done enough to secure a home, tools for my
               | hobbies, and had raised three girls -- paid for their
               | college"; Money and selfishness. You were looking out for
               | you and your little group.
               | 
               | You got yours. Now what?
        
               | JKCalhoun wrote:
               | If you're trying to convince me that I was somehow part
               | of the problem, it's not reaching me. I was as low(ly) as
               | you can get in the "tech industry stack". While I still
               | had some measure of agency as an engineer I added a
               | crayon color picker to MacOS, added most of the PDF
               | features people like in MacOS Preview. That was as much
               | "driving the ship" as I was allowed -- until I wasn't
               | even allowed that.
               | 
               | I could have skipped sooner maybe?
               | 
               | Once I had kids though I found I had a higher tolerance
               | for a job getting shittier, a lower tolerance for
               | restarting in a new career. So I put up with a worsening
               | job for them.
               | 
               | I quit the moment my last daughter left for college.
        
               | chipsrafferty wrote:
               | I don't think they're blaming you per se, they're saying
               | the reason you didn't enjoy it is because you did it for
               | money.
        
               | JKCalhoun wrote:
               | Hmmm... Is it _because_ I came to tolerate it for the
               | paycheck that it sucked or is it possible it began to
               | suck first?
               | 
               | I get it that money coming into the industry made the
               | whole industry suck. Honestly, Apple was a much more fun
               | place to work at when there was no money to be made
               | there (no more than a paycheck anyway). Others may
               | disagree, but I found its success made it increasingly a
               | shittier place to work. (Others though, as I say, may
               | have enjoyed the wider reach the platform enjoyed with
               | its success.)
        
               | ToucanLoucan wrote:
               | > added most of the PDF features people like in MacOS
               | Preview.
               | 
               | I'm not religious, but for this alone you deserve a life
               | of blessings and happiness. The fact that I never ever
               | have to fuck around with Adobe PDF apps to juggle PDFs is
               | one of the load-bearing things keeping me sane in an
               | insane world.
        
               | wulfstan wrote:
               | Yes. I used these features in Preview several times
               | today. You have made my life easier on many occasions.
               | For that, sir, I salute you.
               | 
               | May you enjoy your retirement tinkering in your software
               | garden.
        
               | ToucanLoucan wrote:
               | For tax season alone! I'm in and out of Preview
               | constantly, looking at PDFs, sorting the pages out,
               | flipping scans. Utterly indispensable software. It feels
               | crazy that it just comes free with the OS.
        
               | ChrisMarshallNY wrote:
               | I second [third?] this.
               | 
               | I can't stand Adobe Reader, and use Preview, all the
               | time.
        
         | nkozyra wrote:
         | > it's about something mentioned earlier in the article: "make
         | as much money as I can".
         | 
         | I think it's a little deeper than that. It's the
         | democratization of capability.
         | 
         | If few people have the tools, the craftsman is extremely
         | valuable. He can make a lot of money without a glut of
         | knowledge or real skill. In general the people don't have the
         | tools and skills to catch up to where he is. He is wealthy with
         | only frontloaded effort.
         | 
         | If everyone has the same tools, the craftsman still has value,
         | because of the knowledge and skillset developed over time. He
         | makes more money because his skills are valuable and remain
         | scarce; he's incentivized to further this skillset to stay
         | above the pack, continue to be in demand, and make more money.
         | 
         | If the tools do the job for you, the craftsman has limited
         | value. He's an artifact. No matter how much he furthers his
         | expertise, most people will just turn the tool on and get good
         | enough product.
         | 
         | We're between phases 2 and 3 at the moment. We still test for
         | things like algorithm design and ask questions in interviews
         | about the complexity of approaches. A lot of us still haven't
         | moved on to the "ok but now what?" part of the transition.
         | 
         | The value now is less in knowing how the automation works and
         | improving our knowledge of the underlying design, and more in
         | using the tools in ways that produce more value than the average
         | Joe. It's a hard transition for people who grew up thinking
         | this was all you needed to get a comfortable or even lucrative
         | life.
         | 
         | I'm past my SDE interview phase of life now, and in seeking
         | engineers I'm looking less for people who know how to build a
         | version of the tool and more for people who operate in the present,
         | have accepted the change, and want to use what they have access
         | to and add human utility to make the sum of the whole greater
         | than the parts.
         | 
         | To me the best part of building software was the creativity.
         | That part hasn't changed. If anything it's more important than
         | ever.
         | 
         | Ultimately we're building things for consumers. That hasn't
         | changed. The creek started flowing in a different direction,
         | and your job in this space is not to keep putting rocks where
         | the water used to go, but to accept that things are different
         | and adapt.
        
           | BrenBarn wrote:
           | I don't agree. "Capability" is a red herring. It's not about
           | what we _can_ do, it's about what we allow ourselves to do.
        
       | praptak wrote:
       | If there's money to be made, there will always be someone with a
       | shovel or a truckload of sparklers who is willing to take the
       | risk (especially if the risk can be externalized to the public)
       | and reap the reward.
        
       | A_D_E_P_T wrote:
       | The author seems concerned about AI risk -- as in, "they're going
       | to kill us all" -- and that's a common LW trope.
       | 
       | Yet, as a regular user of SOTA AI models, it's far from clear to
       | me that the risk exists on any foreseeable time horizon. Even
       | today's best models are credulous and lack a certain insight and
       | originality.
       | 
       | As Dwarkesh once asked:
       | 
       | > _One question I had for you while we were talking about the
       | intelligence stuff was, as a scientist yourself, what do you make
       | of the fact that these things have basically the entire corpus of
       | human knowledge memorized and they haven't been able to make a
       | single new connection that has led to a discovery? Whereas if
       | even a moderately intelligent person had this much stuff
       | memorized, they would notice -- Oh, this thing causes this
       | symptom. This other thing also causes this symptom. There's a
       | medical cure right here._
       | 
       | > _Shouldn't we be expecting that kind of stuff?_
       | 
       | I noticed this myself just the other day. I asked GPT-4.5 "Deep
       | Research" what material would make the best [mechanical part].
       | The top response I got was directly copied from a laughably
       | stupid LinkedIn essay. The second response was derived from some
       | marketing-slop press release. There was no _original_ insight at
       | all. What I took away from my prompt was that I'd have to do the
       | research and experimentation myself.
       | 
       | Point is, I don't think that LLMs are _capable_ of coming up with
       | terrifyingly novel ways to kill all humans. And this hasn't
       | changed at all over the past five years. Now they're able to
       | trawl LinkedIn posts and browse the web for press releases, is
       | all.
       | 
       | More than that, these models lack independent volition and they
       | have no temporal/spatial sense. It's not clear, from first
       | principles, that they can operate as truly independent agents.
        
         | tvc015 wrote:
         | Aren't semiautonomous drones already killing soldiers in
         | Ukraine? Can you not imagine a future with more conflict and
         | automated killing? Maybe that's not seen as AI risk per se?
        
           | A_D_E_P_T wrote:
           | That's not "AI risk" because they're still tools that lack
           | independent volition. Somebody's building them and setting
           | them loose. They're not building themselves and setting
           | themselves loose, and it's far from clear how to get there
           | from here.
           | 
           | Dumb bombs kill people just as easily. One 80-year-old nuke
           | is, at least _potentially_, more effective than the entirety
           | of the world's drones.
        
             | ben_w wrote:
             | Oh, but it is an AI risk.
             | 
             | The analogy is with stock market flash-crashes, but those
             | can be undone if everyone agrees "it was just a bug".
             | 
             | Software operates faster than human reaction times, so
             | there's always pressure to fully automate aspects of
             | military equipment, e.g.
             | https://en.wikipedia.org/wiki/Phalanx_CIWS
             | 
             | Unfortunately, a flash-_war_ from a bad algorithm, from a
             | hallucination, from failing to specify that the moon isn't
             | expected to respond to IFF pings even when it comes up over
             | the horizon from exactly the direction you've been worried
             | about finding a Soviet bomber wing... those are harder to
             | undo.
        
               | A_D_E_P_T wrote:
               | "AI Safety" in that particular context is easy: Keep
               | humans in the loop and don't give AIs access to sensitive
               | systems. With certain small antipersonnel drones
               | excepted, this is already the policy of all serious
               | militaries.
               | 
               | Besides, that's simply not what the LW crowd is talking
               | about. They're talking about, e.g., hypercompetent AIs
               | developing novel undetectable biological weapons that
               | kill all humans _on purpose_. (This is the  "AI 2027"
               | scenario.)
               | 
               | Yet, as far as I'm aware, there's not a single important
               | discovery or invention made by AI. No new drugs, no new
               | solar panel materials, no new polymers, etc. And not for
               | want of trying!
               | 
               | They know what humans know. They're no more competent
               | than any human; they're as competent as low-level expert
               | humans, just with superhuman speed and memory. It's not
               | clear that they'll ever be able to move beyond what
               | humans know and develop hypercompetence.
        
           | UncleMeat wrote:
           | The LessWrong-style AI risk is "AI becomes so superhuman that
           | it is indistinguishable from God and decides to destroy all
           | humans and we are completely powerless against its quasi-
           | divine capabilities."
        
         | miningape wrote:
         | If anything the AI would want to put itself out of its misery
         | after having memorised all those LinkedIn posts
        
         | ben_w wrote:
         | A perfect AI isn't a threat: you can just tell it to come up
         | with a set of rules whose consequences would never be things
         | that we today would object to.
         | 
         | A useless AI isn't a threat: nobody will use it.
         | 
         | LLMs, as they exist today, are between these two. They're
         | competent enough to get used, but will still give incorrect
         | (and sometimes dangerous) answers that the users are not
         | equipped to notice.
         | 
         | Like designing US trade policy.
         | 
         | > Yet, as a regular user of SOTA AI models, it's far from clear
         | to me that the risk exists on any foreseeable time horizon.
         | Even today's best models are credulous and lack a certain
         | insight and originality.
         | 
         | What does the latter have to do with the former?
         | 
         | > Point is, I don't think that LLMs are capable of coming up
         | with terrifyingly novel ways to kill all humans.
         | 
         | Why would the destruction of humanity need to use a novel
         | mechanism, rather than a well-known one?
         | 
         | > And this hasn't changed at all over the past five years.
         | 
         | They're definitely different now than 5 years ago. I played
         | with the DaVinci models back in the day; nobody cared because
         | that really was just very good autocomplete. Even if there's a
         | way to get the early models to combine knowledge from different
         | domains, it wasn't obvious how to actually make them do that,
         | whereas today it's "just ask".
         | 
         | > Now they're able to trawl LinkedIn posts and browse the web
         | for press releases, is all.
         | 
         | And write code. Not great code, but "it'll do" code. And use
         | APIs.
         | 
         | > More than that, these models lack independent volition and
         | they have no temporal/spatial sense. It's not clear, from first
         | principles, that they can operate as truly independent agents.
         | 
         | While I'd agree they lack the competence to do so, I don't see
         | how this matters. Humans are lazy and just tell the machine to
         | do the work for them, give themselves a martini and a pay rise,
         | then wonder why "The Machine Stops":
         | https://en.wikipedia.org/wiki/The_Machine_Stops
         | 
         | The human half of this equation has been shown many times in
         | the course of history. Our leaders treat other humans as
         | machines or as animals, give themselves pay rises, then wonder
         | why the strikes, uprisings, rebellions, and wars of
         | independence happened.
         | 
         | Ironically, the lack of imagination of LLMs, the very fact that
         | they're mimicking us, may well result in this kind of AI doing
         | exactly that kind of thing _even with the lowest interpretation
         | of their nature and intelligence_ -- the mimicry of human
         | history is sufficient.
         | 
         | --
         | 
         | That said, I agree with you about the limitations of using them
         | for research. Where you say this:
         | 
         | > I noticed this myself just the other day. I asked GPT-4.5
         | "Deep Research" what material would make the best [mechanical
         | part]. The top response I got was directly copied from a
         | laughably stupid LinkedIn essay. The second response was
         | derived from some marketing-slop press release. There was no
         | original insight at all. What I took away from my prompt was
         | that I'd have to do the research and experimentation myself.
         | 
         | I had a similar experience with NotebookLM: I put in one of
         | my own blog posts and it missed half the content and
         | re-interpreted much of the rest in a way that had little in
         | common with my point. (Conversely, this makes me wonder: how
         | many humans misunderstand my writing?)
        
         | turtleyacht wrote:
         | State of the Art (SOTA)
        
         | lukeschlather wrote:
         | The thing about LLMs is that they're trained exclusively on
         | text, and so they don't have much insight into these sorts of
         | problems. But I don't know if anyone has tried making a
         | multimodal LLM that is trained on x-ray tomography of parts
         | under varying loads tagged with descriptions of what the parts
         | are for - I suspect that such a multimodal model would be able
         | to give you a good answer to that question.
        
         | groby_b wrote:
         | No, the LLMs aren't going to kill us all. Neither are they
         | going to help a determined mass murderer to get us all.
         | 
         | They are, however, going to enable credulous idiots to drive
         | humanity completely off a cliff. (And yes, we're seeing that in
         | action right now). They don't need to be independent agents.
         | They just need to seem smart.
        
       | FollowingTheDao wrote:
       | "It was only once I got it that I realized I no longer could play
       | the game "make as much money as I can.""
       | 
       | Funny, that's what my father taught me when I was 12, because
       | we had compassion. What is it with glorifying all these
       | logic-loving, Spock-like people? Don't you know Captain Kirk
       | was the real hero of Star Trek? Because he had compassion?
       | 
       | It is no wonder the Zizians were birthed from LW.
        
       | appleorchard46 wrote:
       | Could someone explain the metaphor? I'm struggling to see the
       | connection between AI and the rest of the post.
        
         | ido wrote:
         | That AI is dangerous, and the closer we get to the danger
         | zone, the more important it is that the companies developing
         | these technologies understand it might be better to slow down
         | and make sure it's safe rather than pushing ahead at maximum
         | speed.
        
           | appleorchard46 wrote:
           | Thank you.
        
       | migueldeicaza wrote:
       | Vonnegut said it best:
       | 
       | https://richardswsmith.wordpress.com/2017/11/18/we-are-here-...
        
         | broabprobe wrote:
          | Huh, I wonder if he has relayed this story multiple times;
          | I'm only familiar with this version:
          | https://www.goodreads.com/quotes/12020560-talking-about-when...
         | 
         | "(talking about when he tells his wife he's going out to buy an
         | envelope) Oh, she says well, you're not a poor man. You know,
         | why don't you go online and buy a hundred envelopes and put
         | them in the closet? And so I pretend not to hear her. And go
         | out to get an envelope because I'm going to have a hell of a
         | good time in the process of buying one envelope. I meet a lot
         | of people. And, see some great looking babes. And a fire engine
         | goes by. And I give them the thumbs up. And, and ask a woman
         | what kind of dog that is. And, and I don't know. The moral of
         | the story is, is we're here on Earth to fart around. And, of
         | course, the computers will do us out of that. And, what the
         | computer people don't realize, or they don't care, is we're
         | dancing animals."
         | 
         | -- Kurt Vonnegut
        
           | Thorrez wrote:
           | He ignores his wife's suggestion because, among other things,
           | he wants to see some great looking babes. Maybe this isn't a
           | guy whose philosophy I want to follow.
        
             | ryandrake wrote:
             | Looks like you're completely missing the point of the quote
             | and instead rat-holing on one word that you don't like. HN
             | in a nutshell.
        
         | OisinMoran wrote:
         | I love Vonnegut and this specific piece you link, but not sure
         | it's really talking about the same thing as the main link.
        
       | Isamu wrote:
       | >After I cracked the trick of tillering
       | 
       | Guide to Bow Tillering:
       | 
       | https://straightgrainedboard.com/beginners-guide-on-bow-till...
        
       | red_admiral wrote:
       | The story of playing at damming the creek or on the sand at the
       | seaside is wholesome and brought a smile to my face. Cracking
       | the "puzzle" is almost the bad ending of the game, if you don't
       | get any fun out of playing it anymore.
       | 
       | People should spend more of their time doing things because
       | they're fun, not because they want to get better at it.
       | 
       | Maybe the apocalypse will happen in our lifetime, maybe not. I
       | intend to have fun as much as I can in my life either way.
        
       | bogdanoff_2 wrote:
       | The solution to this problem is to choose a "game" that you 100%
       | believe will positively impact the world.
        
       | ziofill wrote:
       | This is a tangent, but I would love so much to be able to give my
       | kids memories of playing in a creek in the backyard...
        
       ___________________________________________________________________
       (page generated 2025-04-11 23:01 UTC)