[HN Gopher] Thoughts on thinking
       ___________________________________________________________________
        
       Thoughts on thinking
        
       Author : bradgessler
       Score  : 209 points
       Date   : 2025-05-16 19:09 UTC (3 hours ago)
        
 (HTM) web link (dcurt.is)
 (TXT) w3m dump (dcurt.is)
        
       | bradgessler wrote:
        | I keep going back and forth on this feeling, but lately I find
        | myself thinking "F it, I'm going to do whatever interests me."
       | 
       | Today I'm working on doing the unthinkable in an AI-world:
       | putting together a video course that teaches developers how to
       | use Phlex components in Rails projects and selling it for a few
       | hundred bucks.
       | 
       | One way of thinking about AI is that it puts so much new
       | information in front of people that they're going to need help
       | from people known to have experience to navigate it all and
       | curate it. Maybe that will become more valuable?
       | 
        | Who knows. That's the worst part at this moment in time--nobody
        | really knows the depths or limits of it all. We'll see
        | breakthroughs in some areas, and not in others.
        
       | don_neufeld wrote:
       | Completely agree.
       | 
       | From all of my observations, the impact of LLMs on human thought
       | quality appears largely corrosive.
       | 
        | I'm very glad my kid's school has _hardcore_ banned them. In
        | some classes they only allow students to turn in work that was
        | done in class, under the direct observation of the teacher.
        | There has also been a significant increase in "on paper" work
        | vs. work done on a computer.
       | 
       | Lest you wonder "what does this guy know anyways?", I'll share
       | that I grew up in a household where both parents were professors
       | of education.
       | 
        | Understanding the effectiveness of different methods of
        | learning (my dad literally taught Science Methods) was a
        | frequent topic.
       | Active learning (creating things using what you're learning
       | about) is so much more effective than passive, reception oriented
       | methods. I think LLMs largely are supporting the latter.
        
         | zdragnar wrote:
         | Anyone who has learned a second language can tell you that you
         | aren't proficient just by memorizing vocabulary and grammar.
         | Having a conversation and forming sentences on the fly just
         | feels different- either as a different skill or using a
         | different part of the brain.
         | 
          | I also don't think the nature of LLMs being a crutch is new
          | knowledge per se; when I was in school, calculus class
          | required a graphing calculator, but the higher-end models
          | (TI-92 etc.) that had symbolic equation solvers were banned,
          | for exactly the same reason. Having something that can give
          | an answer for you fundamentally undermines the value of the
          | exercise in the first place, and cripples your growth while
          | you use it.
        
           | skydhash wrote:
            | Same with drawing, which is easy to teach but hard to
            | master because of the coordination between eye and hand.
            | You can trace a photograph, but that just bypasses the
            | whole point and you don't exercise any of the knowledge.
        
           | flysand7 wrote:
            | Another case in point: memorizing vocabulary and grammar,
            | although it can seem like an efficient way to learn a
            | language, is incredibly unrewarding. I've been learning
            | Japanese from scratch, using only real speech to absorb new
            | words, without dictionaries or much else. The first feeling
            | of reward came immediately, when I learned that "arigatou"
            | means thanks (although I terribly misheard how the word
            | sounded, but hey, at least I heard it). Then after 6
            | months, when I could catch and understand some simple
            | phrases. After 6-7 years I can understand about 80% of any
            | given speech, which is still far from all of it, but I
            | gotta say it was a good experience.
           | 
            | With LLMs giving you ready-made answers, I feel like it's
            | the same. It's not as rewarding, because you haven't
            | obtained the answer yourself. Although it did feel
            | rewarding when I was interrogating an LLM about how CSRF
            | works and it said I'd asked a great question: whether CSRF
            | only applies to forms, since fetch seems to have a
            | different kind of browser protection.
        
         | avaika wrote:
          | This reminds me how back in my school days I was not allowed
          | to use the internet to prepare research on some random topics
          | (e.g. a history essay). It was the late 90s, when the
          | internet started to spread. Anyway, teachers forced us to use
          | offline libraries only.
         | 
          | Later, in university, I studied engineering. And we were
          | forced to prepare all the technical drawings manually in the
          | first year of study. Like literally with pencil and ruler.
          | Even though computer graphics were widely used and were the
          | de facto standard.
         | 
         | Personally I don't believe hardcore ban will help with any sort
         | of thing. It won't stop the progress either. It's much better
         | to help people learn how to use things instead of forcing them
         | to deal with "old school" stuff only.
        
           | baxtr wrote:
           | Yes agreed.
           | 
            | Let's not teach people en masse how to grow potatoes just
            | because we don't want them to eat fries.
        
         | johnisgood wrote:
         | You can learn a lot from LLMs though, same with, say,
         | Wikipedia. You need curiosity. You need the desire to learn. If
         | you do not have it, then of course you will get nowhere, LLMs
         | or no LLMs.
        
           | azinman2 wrote:
            | Never underestimate laziness, or the willingness to take
            | something 80% as good for 1% of the work.
            | 
            | So most are not curious. What do you do for them?
        
             | johnisgood wrote:
             | [delayed]
        
           | snackernews wrote:
           | Can you learn a lot? Or do you get instant answers to every
           | question without learning anything, as OP suggests?
        
             | johnisgood wrote:
             | [delayed]
        
       | taylorallred wrote:
       | Among the many ways that AI causes me existential angst, you've
       | reminded me of another one. That is, the fact that AI pushes you
       | towards the most average thoughts. It makes sense, given the
       | technology. This scares me because creative thought happens at
       | the very edge. When you get stuck on a problem, like you
       | mentioned, you're on the cusp of something novel that will at the
        | very least grow you as a person. The temptation to use AI could
        | rob you of the novelty in favor of what has already been done.
        
         | worldsayshi wrote:
          | I feel there's a glaring counterpoint to this. I have never
          | felt more compelled to try out whatever coding idea pops into
          | my head. I can make Claude write a PoC in seconds to make the
          | idea more concrete, and I can turn it into a good-enough tool
          | in a few afternoons. Before this, all those ideas would just
          | never materialize.
         | 
          | I mean, I get the existential angst though. There's a lot of
          | uncertainty about where all this is heading. But, and this is
          | really a tangent, I feel that the direction of it all lies at
          | the intersection of politics, technology and human nature. I
          | feel like "we the people" hand a walkover to powerful actors
          | if we do not use these new powerful tools in service of the
          | people. For one - to enable new ways to coordinate and
          | organise.
        
       | kdamica wrote:
       | Wow, one day after this post: https://blog.ayjay.org/two-
       | quotations-on-the-brief-dream-of-...
        
       | paulorlando wrote:
       | I've noticed something like this as well. A suggestion is to
       | write/build for no one but yourself. Really no one but yourself.
       | 
       | Some of my best writing came during the time that I didn't try to
       | publicize the content. I didn't even put my name on it. But doing
       | that and staying interested enough to spend the hours to think
       | and write and build takes a strange discipline. Easy for me to
       | say as I don't know that I've had it myself.
       | 
       | Another way to think about it: Does AI turn you into Garry
       | Kasparov (who kept playing chess as AI beat him) or Lee Sedol
       | (who, at least for now, has retired from Go)?
       | 
       | If there's no way through this time, I'll just have to
       | occasionally smooth out the crinkled digital copies of my past
       | thoughts and sigh wistfully. But I don't think it's the end.
        
         | ge96 wrote:
          | Yeah, there is the personal passion, and then the
          | points/likes-driven kind, which sucks the joy out.
          | 
          | I experienced this when I was younger with my RC planes. I
          | joined some forum and felt like everything I did had to be
          | posted/liked to have value. I'd post designs/fantasies and
          | get the likes, then lose interest/not actually do it after I
          | got the ego bump.
        
           | paulorlando wrote:
           | I think that's why some writers refuse to talk about their
           | work in progress. If they do, it saps the life out of it.
           | Your ego bump example kind of fits into that.
        
       | wjholden wrote:
       | I just wrote a paper a few days ago arguing that "manual
       | thinking" is going to become a rare and valuable skill in the
       | future. When you look around you, everyone is finding ways to be
        | better using AI, and they're all finding amazing successes -
        | but we're also unsure about the downsides. My hedge is that my
        | advantage in ten years will be that I chose not to do what
        | everyone else did. I might regret it; we will see.
        
         | oytis wrote:
         | If AI is going to be as economically efficient as promised,
         | there is going to be no way to avoid using it altogether. So
          | the trick will be to keep your thinking skills functional
          | while still using AI for speedup. Like focus in the age of
          | the Internet is a rare skill, but not using the Internet is
          | not an option either.
        
           | fkfyshroglk wrote:
           | Not all processes are the same, though. I strongly suspect
           | any efficiency improvements will come in processes that
           | didn't require much "thinking" to begin with. I use it daily,
           | but mostly as a way to essentially type faster--I can read
           | much faster than I can type, so I mostly validate and correct
           | the autocomplete. All of my efforts to get it to produce
           | trustworthy output beyond this seem to trail behind the
           | efficiency of just searching the internet.
           | 
           | Granted, I'm blessed to not have much busywork; if I need to
           | produce corporate docs or listicles AI would be a massive
           | boon. But I also suspect AI will be used to digest these
           | things back into small bullet points.
        
       | MarcelOlsz wrote:
        | Haven't resonated with an article this much in a long time.
        
       | paintboard3 wrote:
       | I've been finding a lot of fulfillment in using AI to assist with
       | things that are (for now) outside of the scope of one-shot AI.
       | For example, when working on projects that require physical
       | assembly or hands-on work, AI feels more like a superpower than a
       | crutch, and it enables me to tackle projects that I wouldn't have
        | touched otherwise. In my case, this was applied to physical
        | building, electronics, and multimedia projects that rely on
        | simple code outside of my domain of expertise.
       | 
       | The core takeaway for me is that if you have the desire to
       | stretch your scope as wide as possible, you can get things done
       | in a fun way with reduced friction, and still feel like your
       | physical being is what made the project happen. Often this means
       | doing something that is either multidisciplinary or outside of
       | the scope of just being behind a computer screen, which isn't
       | everyone's desire and that's okay, too.
        
         | sanderjd wrote:
          | Yeah, I haven't found the right language for this yet, but
          | it's something like: I'm happy and optimistic about LLMs when
          | I'm the one _doing_ something, and more anxious about them
          | when I'm _supporting_ someone else in doing something. Or: It
          | makes me more excited to focus on _ends_, and less excited to
          | focus on _means_.
         | 
         | Like, in the recent past, someone who wanted to achieve some
         | goal with software would either need to learn a bunch of stuff
         | about software development, or would need to hire someone like
         | me to bring their idea to life. But now, they can get a lot
         | further on their own, with the support of these new tools.
         | 
          | I think that's _good_, but it's also nerve-wracking from an
          | employment perspective. But my ultimate conclusion is that I
          | want to work closer to the ends rather than the means.
        
           | apsurd wrote:
           | Interesting, I just replied to this post recommending the
           | exact opposite: to focus on means vs ends.
           | 
            | The post laments how everything feels useless when any
            | conceivable "end state" a human can produce will be
            | inferior to what LLMs can do.
           | 
           | So an honest attention toward the means of how something
           | comes about--the process of the thinking vs the polished
           | great thought--is what life is made of.
           | 
           | Another comment talks about hand-made bread. People do it and
           | enjoy it even though "making bread is a solved problem".
        
             | sanderjd wrote:
             | I saw that and thought it was an interesting dichotomy.
             | 
             | I think a way to square the circle is to recognize that
             | people have different goals at different times. As a person
             | with a family who is not independently wealthy, I care a
             | lot about being economically productive. But I also
             | separately care about the joy of creation.
             | 
             | If my goal in making a loaf of bread is economic
             | productivity, I will be happy if I have a robot available
             | that helps me do that quickly. But if my goal is to find
             | joy in the act of creation, I will not use that robot
             | because it would not achieve that goal.
             | 
             | I do still find joy in the act of creating software, but
             | that was already dwindling long before chatgpt launched,
             | and mostly what I'm doing with computers is with the goal
             | of economic productivity.
             | 
             | But yeah I'll probably still create software just for the
             | joy of it from time to time in the future, and I'm unlikely
             | to use AIs for those projects!
             | 
             | But at work, I'm gonna be directing my efforts toward
             | taking advantage of the tools available to create useful
             | things efficiently.
        
               | apsurd wrote:
                | ooh I like this take. We can change the framing. In
                | the frame of one's livelihood we need to be concerned
                | with economic productivity, philosophy be damned.
        
       | WillAdams wrote:
       | This is only a problem if one is writing/thinking on things which
       | have already been written about without creating a new/novel
       | approach in one's writing.
       | 
        | An AI is _not_ going to get awarded a PhD, since by definition
        | such degrees are earned by extending the boundaries of human
        | knowledge:
       | 
       | https://matt.might.net/articles/phd-school-in-pictures/
       | 
       | So rather than accept that an LLM has been trained on whatever it
       | is you wish to write, write something which it will need to be
       | trained on.
        
         | noiv wrote:
          | I agree, frustration steps in earlier than before, because
          | AIs tell you very eagerly that your unique thought is already
          | part of their training data. Sometimes I wish one could put
          | an AI on drugs and filter out some hallucinations that'll
          | become mainstream next week.
        
       | jonplackett wrote:
        | I'm so, so glad it said "Written entirely by a human" at the
        | end, because all the way through I was thinking: absolutely no
        | way does AI come up with great ideas or insights, and it
        | definitely would not write this article - it holds together and
        | flows too well. (If this turns out to be the joke, I'll admit
        | we're all now fucked.)
       | 
       | Having said that I am very worried about kids growing up with AI
       | and it stunting their critical thinking before it begins - but as
       | of right this moment AI is extremely sub par at genuinely good
       | ideas or writing.
       | 
        | It's an amazing and useful tool I use all the time, though, and
        | one I'd struggle to be without.
        
       | tutanosh wrote:
       | I used to feel the same way about AI, but my perspective has
       | completely changed.
       | 
       | The key is to treat AI as a tool, not as a magic wand that will
       | do everything for you.
       | 
       | Even if AI could handle every task, leaning on it that way would
       | mean surrendering control of your own life--and that's never
       | healthy.
       | 
       | What works for me is keeping responsibility for the big picture--
       | what I want to achieve and how all the pieces fit together--while
       | using AI for well-defined tasks. That way I stay fully in
       | control, and it's a lot more fun this way too.
        
       | bachittle wrote:
       | The article mentions that spell and grammar checking AI was used
       | to help form the article. I think there is a spectrum here, with
       | spell and grammar checking on one end, and the fears the article
       | mentions on the other end (AI replacing our necessity to think).
       | If we had a dial to manually adjust what AI works on, this may
       | help solve the problems mentioned here. The issue is that all the
       | AI companies are trying too hard to achieve AGI, and thus making
       | the interfaces general and without controls like this.
        
       | perplex wrote:
       | I don't think LLMs replace thinking, but rather elevate it. When
       | I use an LLM, I'm still doing the intellectual work, but I'm
       | freed from the mechanics of writing. It's similar to programming
       | in C instead of assembly: I'm operating at a higher level of
       | abstraction, focusing more on what I want to say than how to say
       | it.
        
         | cratermoon wrote:
          | The writing _is_ the work, though. The words on paper (or
          | wherever) are the end product, but they are not the point.
          | See chapter 5 of Ahrens, Sönke. 2017. _How to Take Smart
          | Notes: One Simple Technique to Boost Writing, Learning and
          | Thinking - for Students, Academics and Nonfiction Book
          | Writers_, for advice on how writing the ideas in your own
          | words is the primary task and improves not only writing, but
          | all intellectual skills, including reading and thinking. C.
          | Wright Mills, in his 1952 essay "On Intellectual
          | Craftsmanship", says much the same thing. Stating the ideas
          | _in your own words_ is thinking.
        
         | i1856511 wrote:
         | If you do not know how to say something, you don't know how to
         | say it.
        
         | FeteCommuniste wrote:
         | When I microwave a frozen meal for dinner, I'm still a chef,
         | but I'm freed from the mechanics of preparing and assembling
         | ingredients to form a dish.
        
       | jebarker wrote:
       | > nothing I make organically can compete with what AI already
       | produces--or soon will.
       | 
       | No LLM can ever express your unique human experience (or even
       | speak from experience), so on that axis of competition you win by
       | default.
       | 
       | Regurgitating facts and the mean opinion on topics is no
       | replacement for the thoughts of a unique human. The idea that
       | you're competing with AI on some absolute scale of the quality of
       | your thought is a sad way to live.
        
         | steamrolled wrote:
         | More generally, prior to LLMs, you were competing with 8
         | billion people alive (plus all of our notable dead). Any novel
         | you could write probably had some precedent. Any personal story
         | you could tell probably happened to someone else too. Any skill
         | you wanted to develop, there probably was another person more
         | capable of doing the same.
         | 
         | It was never a useful metric to begin with. If your life goal
         | is to be #1 on the planet, the odds are not in your favor. And
         | if you get there, it's almost certainly going to be
         | unfulfilling. Who is the #1 Java programmer in the world? The
         | #1 topologist? Do they get a lot of recognition and love?
        
           | harrison_clarke wrote:
           | a fun thing about having a high-dimensional fitness function
           | is that it's pretty easy to not be strictly worse than anyone
        
       | martin-t wrote:
       | This resonates with me deeply.
       | 
        | I used to write open source a lot, but lately I don't see the
        | point. Not because I think LLMs can produce novel code as good
        | as mine, or will be able to in the near future. But because any
        | time I come up with a new solution to something, it will be
        | stolen and used without my permission, without giving me credit
        | or giving users the rights I give them. And it will be mangled
        | just enough that I can't prove anything.
       | 
       | Large corporations were so anal about copyright that people who
       | ever saw Microsoft's code were forbidden from contributing to
       | FOSS alternatives like wine. But only as long as copyright suited
       | them. Now abolishing copyright promises the C-suite even bigger
       | rewards by getting rid of those pesky expensive programmers, if
       | only they could just steal enough code to mix and match it with
       | enough plausible deniability.
       | 
        | And so even though _in principle_ anybody using my AGPL code,
        | or anything that incorporates my AGPL code, has the right to
        | inspect and modify said code, tiny fractions of my AGPL code
        | now have millions or potentially billions of users, but nobody
        | knows and nobody has the right to do anything about it.
       | 
       | And those who benefit the most are those who already have more
       | money than they can spend.
        
       | Trasmatta wrote:
       | Agreed. I keep seeing posts where people claim the output is all
       | that really matters (particularly with code), and I think that's
       | missing something deeply fundamental about being human.
        
       | boznz wrote:
        | Think bigger. As LLMs' capabilities expand, use them to push
        | yourself outside your comfort zone every now and then. I have
        | done some great, fun projects recently that I would never have
        | thought of tackling before.
        
       | cess11 wrote:
        | This person should probably read pre-Internet books and
        | discover, or rediscover, that the bar for passable expression
        | in text is very low compared to what it was.
       | 
       | Most of that 'corpus' isn't even on the Internet so it is wholly
       | unknown to our "AI" masters.
        
       | tibbar wrote:
        | AI is far better in style than in substance. That is, an AI-
        | written solution will have all the trappings of expertise and
        | deep thought, but frankly is usually at best mediocre. It's
        | sort of like when you hear someone make a very eloquent point
        | and your instinct is to think "wow, that's the final word on
        | the subject, then!" ... and then someone who _actually
        | understands_ the subject points out the flaw in the elegant
        | argument, and it falls down like a house of cards. So, at least
        | for now, don't be fooled! Develop expertise; be the person who
        | really understands stuff.
        
       | uludag wrote:
       | What happens to conversation in this case? When groups of people
       | are trained to use LLMs as a crutch for thinking, what happens
       | when people get together and need to discuss something. I feel
       | like the averageness of the thinking would get compounded so that
       | the conversation as a whole becomes nothing but a staging ground
        | for a prompt. Would an hour-long conversation about the
        | intricacies of a system architecture become a ten-minute
        | chatter about what prompts people want to ask? Does everyone
        | then channel their LLM to the rest of the group? In the end,
        | would the most probable response, the one all the LLMs in use
        | agree on, become the result of the conversation?
        
         | FeteCommuniste wrote:
         | Eventually the LLMs get plugged directly into our brains and do
         | all our thinking and lip movements for us. We can all be Sam
         | Altman's meatpuppets.
        
       | mlboss wrote:
       | What we need is mental gyms. In modern society there is no need
       | for physical labor but we go to gyms just to keep ourselves
       | healthy.
       | 
        | Similarly, in the future we will not need mental "labor", but
        | to keep ourselves sharp we will need to engage in mental
        | exercises. I am thinking of picking up chess again just for
        | this reason.
        
         | incognito124 wrote:
          | IMO chess is not the best mental gym. Personally, I've
          | started exercising mental arithmetic.
        
       | spinach wrote:
       | I've had the same experience but with drawing. What's the point
        | when AI can generate perfect finished pieces in seconds? Why
        | put all that effort into learning and drawing something? It's
        | always been hard for me, but it used to feel worth it for the
        | finished piece; now you can bring a piece of computer art into
        | being with a simple word prompt.
       | 
       | I still create, I just use physical materials like clay and such,
       | to make things that AI can't yet replicate.
        
         | add-sub-mul-div wrote:
         | AI won't create something perfect or even better. And to the
         | extent we enjoy creating for its own sake that's still there.
         | But it's true that real creators will have a harder time being
         | seen by others when there's an ocean of slop being shoved down
         | our throats by parties with endless budgets.
        
       | regurgist wrote:
       | You are in a maze of twisty little passages that are not the
       | same. You have to figure out which ones are not. Try this. The
       | world is infinitely complex. AI is very good at dealing with the
       | things it knows and can know. It can write more accurately than I
       | can, spell better. It just takes stuff I do and learns from my
       | mistakes. I'm good with that. But here is something to ask AI:
       | 
       | Name three things you cannot think about because of the language
       | you use?
       | 
       | Or "why do people cook curds when making cheese."
       | 
        | AI is at least to some extent an artificial regurgitarian. It can
       | tell you about things that have been thought. Cool. But here is a
       | question for you. Are there things that you can think about that
       | have not been thought about before?
       | 
       | The reason people cook curds is because the goal of cheese making
       | was to preserve milk, not to make cheese.
        
       | curl-up wrote:
       | > The fun has been sucked out of the process of creation because
       | nothing I make organically can compete with what AI already
       | produces--or soon will.
       | 
        | So the fun, all along, was not in the process of creation
        | itself, but in the fact that the creator could somehow feel
        | superior to others who cannot create? I find this to be a very
        | unhealthy relationship to creativity.
       | 
       | My mixer can mix dough better than I can, but I still enjoy
       | kneading it by hand. The incredibly good artisanal bakery down
       | the street did not reduce my enjoyment of baking, even though I
       | cannot compete with them in quality by any measure. Modern slip
       | casting can make superior pottery by many different quality
       | measures, but potters enjoy throwing it on a wheel and producing
       | unique pieces.
       | 
       | But if your idea of fun is tied to the "no one else can do this
       | but me", then you've been doing it wrong before AI existed.
        
         | patcon wrote:
          | Yeah, I think you're onto something. I'm not sure the
          | performative motivation is necessarily bad, but it's def
          | different.
         | 
         | Maybe AI is like Covid, where it will reveal that there were
         | subtle differences in the underlying humans all along, but we
         | just never realized it until something shattered the ability
         | for ambiguity to persist.
         | 
          | I'm inclined to say that this is a destabilising thing,
          | regardless of my thoughts on the "right" way to think about
          | creativity. Multiple ways could coexist before, and now one
          | way no longer "works".
        
         | quantumgarbage wrote:
         | I think you are way past the argument the writer is making.
        
         | ebiester wrote:
         | Let's frame it more generously: The reward is based on being
         | able to contribute something novel to the world - not because
         | nobody else can but because it's another contribution to the
         | world's knowledge.
        
           | curl-up wrote:
            | If the core idea that was intended to be broadcast to the
            | world was a "contribution", and the LLM simply expanded on
            | it, then I would view LLMs as simply a component in that
            | broadcasting operation (just as the internet infrastructure
            | would be), and the author's contribution would still be
            | intact, and so should his enjoyment.
           | 
           | But his argument does not align with that. His argument is
           | that he enjoys the act of writing itself. If he views his act
           | of writing (regardless of the idea being transmitted) as his
           | "contribution to world's knowledge", then I have to say I
           | disagree - I don't think his writing is particularly
           | interesting in and of itself. His ideas might be interesting
           | (even if I disagree), but he obviously doesn't find the
           | formation of ideas enjoyable enough.
        
           | Viliam1234 wrote:
           | Now you can contribute something novel to the world by
           | pressing a button. Sounds like an improvement.
        
             | drdeca wrote:
             | If one merely presses a button (the same button, not
             | choosing what button to push based on context), I don't see
             | what it is that one has contributed? One of those tippy
             | bird toys can press a button.
        
           | lo_zamoyski wrote:
           | The primary motivation should be wisdom. No one can become
           | wise for you. You don't become any wiser yourself that way.
           | And a machine isn't even capable of being wise.
           | 
           | So while AI might remove the need for human beings to engage
           | in certain practical activities, it cannot eliminate the
           | theoretical, because by definition, theory is done for its
           | own sake, to benefit the person theorizing by leading them to
           | understanding something about the world. AI can perhaps find
           | a beneficial place here in the way books or teachers do, as
           | guides. But in all these cases, you absolutely need to engage
           | with the subject matter yourself to profit from it.
        
           | mionhe wrote:
           | It sounds as if the reward is primarily monetary in this
           | case.
           | 
           | As some others have commented, you can find rewards that
           | aren't monetary to motivate you, and you can find ways to
           | make your work so unique that people are willing to pay for
           | it.
           | 
           | Technology forces us to use the creative process to more
           | creatively monetize our work.
        
         | movpasd wrote:
         | Sometimes the fun is in creating something useful, as a human,
         | for humans. We want to feel useful to our tribe.
        
         | Tuperoir wrote:
         | Good point.
         | 
         | Self-actualisation should be about doing the things that only
          | you can. Not better than anyone else, but more like the
          | specific things that only you, with the sum of your
          | experience, expertise, values and constraints, can do.
        
         | StefanBatory wrote:
         | Knowing that what I do anyone can do, no matter how well I'll
         | do it, is discouraging. Because then, what is my purpose? What
         | can I say that I'm good at?
        
           | curl-up wrote:
           | Would you say that the chess players became "purposeless"
           | with Deep Blue, or Go players with Alpha Go?
        
             | rfw300 wrote:
             | It's interesting that you name those examples, because Lee
             | Sedol, the all-time great Go player, retired shortly after
             | losing to Alpha Go, saying: "Even if I become the number
             | one, there is an entity that cannot be defeated... losing
             | to AI, in a sense, meant my entire world was collapsing...
             | I could no longer enjoy the game. So I retired." [1, 2]
             | 
             | So for some, yes. It is of course also true that many
             | people derive self-worth and fulfillment from contributing
             | positively to the world, and AI automating the productive
             | work in which they specialize can undermine that.
             | 
             | [1] https://en.yna.co.kr/view/AEN20191127004800315
             | 
             | [2] https://www.nytimes.com/2024/07/10/world/asia/lee-
             | saedol-go-...
        
               | curl-up wrote:
               | I am in no way disputing that some people would feel that
               | way because of AI, just as some performing classical
               | musicians felt that way in the advent of the audio
               | recorder.
               | 
               | What I am saying is that (1) I regard this as an
               | unhealthy relationship to creativity (and I accept that
               | this is subjective), and (2) that most people do not feel
               | that way, as can be confirmed by the fact that chess, go,
               | and live music performances are all still very much
               | practiced.
        
           | aprdm wrote:
            | That's a deeper question that only you can answer. I can
            | only say that thinking the way you've phrased it doesn't
            | really lead to happiness in general.
        
         | yapyap wrote:
         | I mean yeah apparently so for the OP but I'm sure he did not
         | mean for it to be that way intentionally
        
         | jsemrau wrote:
         | "The fun has been sucked out of the process of preparing food
         | because nothing I make organically can compete with what
          | restaurants/supermarkets already produce--or soon will."
        
         | getpokedagain wrote:
          | I don't think it's solely to rub in others' faces that they
          | can't create. You hope they learn to as well.
        
         | kelseyfrog wrote:
         | > So the fun, all along, was not in the process of creation
         | itself, but in the fact that the creator could somehow feel
         | superior to others not being able to create? I find this to be
         | a very unhealthy relationship to creativity.
         | 
         | People realize this at various points in their life, and some
         | not at all.
         | 
         | In terms the author might accept, the metaphor of the stoic
         | archer comes to mind. Focusing on the action, not the target is
         | what relieves one of the disappointment of outcome. In this
          | case, the action is writing while the target is having better
         | thoughts.
         | 
         | Much of our life is governed by the success at which we hit our
         | targets, but why do that to oneself? We have a choice in how we
         | approach the world, and setting our intentions toward action
         | and away from targets is a subtle yet profound shift.
         | 
         | A clearer example might be someone who wants to make a friend.
         | Let's imagine they're at a party and they go in with the
         | intention of making a friend, they're setting themselves up for
         | failure. They have relatively little control over that outcome.
         | However, if they go in with the intention of showing up
         | authentically - something people tend to appreciate, and
          | something they have full control over - the chances of them
         | succeeding increase dramatically.
         | 
         | Choosing one's goals - primarily grounded in action - is an
         | under-appreciated perspective.
        
         | garrettj wrote:
         | Yeah, there's something this person needs to embrace about the
         | process rather than being some kind of modern John Henry,
         | comparing themselves to a machine. There's still value in the
         | things a person creates despite what AI can derive from its
         | training model of Reddit comments. Find peace in the process of
         | making and you'll continue to love it.
        
         | nthingtohide wrote:
         | We need to start taking a leaf of advice from spiritual
         | knowledge that "You are not the doer." You were never the doer.
         | The doing happened on its own. You were merely a vessel, an
         | instrument. A Witness. Observe your inner mechanisms of mind,
         | and you will quickly come to this realisation.
        
         | wcfrobert wrote:
         | I think the article is getting at the fact that in a post-AGI
         | world, human skill is a depreciating asset. This is terrifying
         | because we exchange our physical and mental labor for money.
          | Consider this: why would a company hire me if - with enough
          | GPUs and capital - they can spin up 1,000 AI agents, each
          | smarter than me, to do the work?
         | 
          | With AGI, knowledge workers will be worth less until they are
         | worthless.
         | 
         | While I'm genuinely excited about the scientific progress AGI
         | will bring (e.g. curing all diseases), I really hope there's a
         | place for me in the post-AGI world. Otherwise, like the potters
         | and bakers who can't compete in the market with cold-hard
         | industrial machines, I'll be selling my python code base on
         | Etsy.
         | 
         | No Set Gauge had an excellent blog post about this. Have a read
         | if you want a dash of existential dread for the weekend:
         | https://www.nosetgauge.com/p/capital-agi-and-human-ambition.
        
           | senordevnyc wrote:
           | This is only terrifying because of how we've structured
           | society. There's a version of the trajectory we're on that
           | leads to a post-scarcity society. I'm not sure we can pull
           | that off as a species, but even if we can, it's going to be a
           | bumpy road.
        
             | GuinansEyebrows wrote:
             | the barrier to that version of the trajectory is that "we"
             | haven't structured society. what structure exists, exists
             | as a result of capital extracting as much wealth from labor
             | as labor will allow (often by dividing class interests
             | among labor).
             | 
             | agreed on the bumpy road - i don't see how we'll reach a
             | post-scarcity society unless there is an intentional
             | restructuring (which, many people think, would require a
             | pretty violent paradigm shift).
        
           | 9dev wrote:
           | That seems like a very narrow perspective. For one, it is
           | neither clear we will end up with AGI at all--we could have
           | reached or soon reach a plateau with the possibilities of the
           | LLM technology--or whether it'll work like what you're
           | describing; the energy requirements might not be feasible,
           | for example, or usage is so expensive it's just not worth
           | applying it to every mundane task under the sun, like writing
           | CRUD apps in Python. We know how to build flying cars,
           | technically, but it's just not economically sustainable to
           | use them. And finally, you never know what niches are going
           | to be freed up or created by the ominous AGI machines
           | appearing on the stage.
           | 
           | I wouldn't worry too much yet.
        
           | Animats wrote:
           | > With AGI, Knowledge workers will be worth less until they
           | are worthless.
           | 
           | "Knowledge workers" being in charge is a recent idea that is,
           | perhaps, reaching end of life. Up until WWII or so, society
           | had more smart people than it had roles for them. For most of
           | history, being strong and healthy, with a good voice and a
           | strong personality, counted for more than being smart. To a
           | considerable extent, it still does.
           | 
           | In the 1950s, C.P. Snow's "Two Cultures" became famous for
           | pointing out that the smart people were on the way up.[1]
           | They hadn't won yet; that was about two decades ahead. The
           | triumph of the nerds took until the early 1990s.[2] The
           | ultimate victory was, perhaps, the collapse of the Soviet
           | Union in 1991. That was the last major power run by goons.
           | That's celebrated in The End of History and the Last Man
           | (1992).[3] Everything was going to be run by technocrats and
           | experts from now on.
           | 
           | But it didn't last. Government by goons is back. Don't need
           | to elaborate on that.
           | 
           | The glut of smart people will continue to grow. Over half of
           | Americans with college educations work in jobs that don't
           | require a college education. AI will accelerate that process.
           | It doesn't require AI superintelligence to return smart
           | people to the rabble. Just AI somewhat above the human
           | average.
           | 
           | [1] https://en.wikipedia.org/wiki/The_Two_Cultures
           | 
           | [2] https://archive.org/details/triumph_of_the_nerds
           | 
           | [3] https://en.wikipedia.org/wiki/The_End_of_History_and_the_
           | Las...
        
         | gibbitz wrote:
         | I think the point is that part of the value of a work of art to
         | this point is the effort or lack of effort involved in its
         | creation. Evidence of effort has traditionally been a sign of
         | the quality of thought put into a work as a product of time
          | spent in its creation. LLMs short-circuit this instinct in
          | evaluation, making some think works generated by AI are
          | better than they are, while simultaneously making those who
          | create see their work devalued (which is the demotivator
          | here).
         | 
          | I'm curious why so many people see creators and intellectuals as
         | competitive people trying to prove they're better than someone
         | else. This isn't why people are driven to seek knowledge or
         | create Art. I'm sure everyone has their reasons for this, but
         | it feels like insecurity from the outside.
         | 
         | Looking at debates about AI and Art outside of IP often brings
         | out a lot of misunderstandings about what makes good Art and
         | why Art is a thing man has been compelled to make since the
         | beginning of the species. It takes a lifetime to select
         | techniques and thought patterns that define a unique and
         | authentic voice. A lifetime of working hard on creating things
         | adds up to that voice. When you start to believe that work is
         | in vain because the audience doesn't know the difference it
         | certainly doesn't make it feel rewarding to do.
        
       | woah wrote:
       | > But now, when my brain spontaneously forms a tiny sliver of a
       | potentially interesting concept or idea, I can just shove a few
       | sloppy words into a prompt and almost instantly get a fully
       | reasoned, researched, and completed thought. Minimal organic
       | thinking required.
       | 
       | No offense, but I've found that AI outputs very polished but very
       | average work. If I am working on something more original, it is
       | hard to get AI to output reasoning about it without heavy
       | explanation and guidance. And even then, it will "revert to the
       | mean" and stumble back into a rut of familiar concepts after a
       | few prompts. Guiding it back onto the original idea repeatedly
       | quickly uses up context.
       | 
       | If an AI is able to take a sliver of an idea and output something
       | very polished from it, then it probably wasn't that original in
       | the first place.
        
       | apsurd wrote:
       | If you are in the camp of "ends justify the means" then it makes
       | sense why all this is scary and nihilistic. What's the point
       | doing anything if the outcome is infinitely better done in some
       | other way by some other thing/being/agent?
       | 
       | If the "means justify the ends" then doing anything is its own
       | reason.
       | 
        | And in the _end_, the cards will land where they may. Ends-
        | justify-means is really logical and alluring, until I stop and
        | ask: why am I optimizing for the end at all?
        
       | ankit219 wrote:
        | Familiar scenario. One thing that would help is to use AI as
        | someone you are having a conversation with (most AIs need to
        | be explicitly told before you start this). You
       | tell them not to agree with you, to ask more questions instead of
       | providing the answers, to offer justifications and background as
       | to why those questions are being asked. This helps you refine
       | your ideas more, understand the blind spots, and explore
       | different perspectives. Yes, an LLM can refine the idea for you,
       | especially if something like that is already explored. It can
       | also be the brainstorming accessory who helps you to think
       | harder. Come up with new ideas. The key is to be intentional
       | about which way you want it. I once made Claude roleplay as a
        | busy exec who would not be interested in my offering until I
       | refined it 7 times (and it kept offering reasons as to why an AI
       | exec would or would not read it).
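        | 
        | (A minimal sketch of the setup described above, assuming an
        | OpenAI-style chat-message format; the prompt wording and helper
        | names are illustrative, not the commenter's actual
        | configuration.)

```python
# Sketch: front-load a "disagree and ask questions" system prompt, as
# described in the comment above. Wording and names are illustrative.

SOCRATIC_SYSTEM_PROMPT = (
    "Do not agree with me by default. Instead of giving answers, ask "
    "probing questions, and explain why you are asking each one. "
    "Point out blind spots and offer competing perspectives."
)

def build_messages(idea, history=None):
    """Assemble an OpenAI-style message list with the instructions first."""
    messages = [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": idea})
    return messages

msgs = build_messages("Here's my rough pitch; poke holes in it.")
print(msgs[0]["role"], len(msgs))  # system 2
```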
        
       | regurgist wrote:
       | You are in a maze of twisty little passages that are not the
       | same. You have to figure out which ones are not.
       | 
       | Try this. The world is infinitely complex. AI is very good at
       | dealing with the things it knows and can know. It can write more
       | accurately than I can, spell better. It just takes stuff I do and
       | learns from my mistakes. I'm good with that. But here is
       | something to ask AI:
       | 
       | "Name three things you cannot think about because of the language
       | you use?"
       | 
       | Or "why do people cook curds when making cheese."
       | 
       | AI is at least to some degree an artificial regurgitarian. It can
       | tell you about things that have been thought. Cool. But here is a
       | question for you. Are there things that you can think about that
       | have not been thought about before or that have been thought
       | about incorrectly?
       | 
       | The reason people cook curds is because the goal of cheese making
       | was (in the past) to preserve milk, not to make cheese.
        
         | dave1999x wrote:
          | I really don't understand your point. This is nonsense to me.
        
       | agotterer wrote:
       | I appreciate your thoughts and definitely share some of your
       | sentiments. But don't underestimate the value and importance of
       | continuing to be a creative.
       | 
       | Something I've been thinking about lately is the idea of creative
       | stagnation due to AI. If AI's creativity relies entirely on
        | training from existing writing, architecture, art, music,
       | movies, etc., then future AI might end up being trained only on
       | derivatives of today's work. If we stop generating original ideas
       | or developing new styles of art, music, etc., how long before
       | society gets stuck endlessly recycling the same sounds, concepts,
       | and designs?
        
       | MinimalAction wrote:
       | > My thinking systems have atrophied, and I can feel it.
       | 
       | I do understand where the author is coming from. Most of the
        | times, it is easier to _read_ an answer---regardless of whether
        | it is right/wrong, relevant or not---than to _think_ of an
        | answer. So AI does take that friction of thinking away.
       | 
        | But, I am still disappointed by all this doom because of AI. I
        | am inclined to throw up my hands and say "just don't use it
        | then". The process of thinking is where the fun lies, not in
        | showing the world that I am better than so and so, or always
        | right.
        
       | williamcotton wrote:
       | > The fun has been sucked out of the process of creation because
       | nothing I make organically can compete with what AI already
       | produces--or soon will.
       | 
       | I find immense joy and satisfaction when I write poetry. It's
       | like crafting a puzzle made of words and emotions. While I do
       | enjoy the output, if there is any goal it is to tap into and be
       | absorbed by the process itself.
       | 
       | Meanwhile, code? At least for me, and to speak nothing of those
       | that approach the craft differently, it is (almost) nothing but a
        | means to an end! I do enjoy the little projects I work on. Hmm,
       | maybe for me software is about adding another tool to the belt
       | that will help with the ongoing journey. Who knows. It definitely
       | feels very different to outsource coding than to outsource my
       | artistic endeavors.
       | 
       | One thing that I know won't go away are the small pockets of
       | poetry readings, singer-songwriters, and other artistic
       | approaches that are decidedly more personal in both creation and
       | audience. There are engaged audiences for art and there are
       | passive consumers. I don't think this changes much with AI.
        
       | Centigonal wrote:
       | > But now, when my brain spontaneously forms a tiny sliver of a
       | potentially interesting concept or idea, I can just shove a few
       | sloppy words into a prompt and almost instantly get a fully
       | reasoned, researched, and completed thought.
       | 
       | I can't relate to this at all. The reason I write, debate, or
       | think at all is to find out what I believe and discover my voice.
       | Having an LLM write an essay based on one of my thoughts is about
       | as "me" as reading a thinkpiece that's tangentially related to
       | something I care about. I write because I want to get _my_
       | thoughts out onto the page, in _my_ voice.
       | 
       | I find LLMs useful for a lot of things, but using an LLM to
       | shortcut personal writing is antithetical to what I see as the
       | purpose of personal writing.
        
       | sippndipp wrote:
       | I understand the depression. I'm a developer (professional) and I
        | make music (ambitious hobby). Both arts are heavily in a
        | transformational process.
       | 
       | I'd like to challenge a few things. I rarely have a moment where
       | an LLM provides me a creative spark. It's more that I don't
       | forget anything from the mediocre galaxy of thoughts.
       | 
       | See AI as a tool.
       | 
       | A tool that helps you to automate repetitive cognitive work.
        
       | hodder wrote:
       | "AI could probably have written this post far more quickly,
       | eloquently, and concisely. It's horrifying."
       | 
        | So I had ChatGPT write the post more eloquently:
       | 
       | May 16, 2025 On Thinking
       | 
       | I've been stuck.
       | 
       | Every time I sit down to write a blog post, code a feature, or
       | start a project, I hit the same wall: in the age of AI, it all
       | feels pointless. It's unsettling. The joy of creation--the spark
       | that once came from building something original--feels dimmed, if
       | not extinguished. Because no matter what I make, AI can already
       | do it better. Or soon will.
       | 
       | What used to feel generative now feels futile. My thoughts seem
       | like rough drafts of ideas that an LLM could polish and complete
       | in seconds. And that's disorienting.
       | 
       | I used to write constantly. I'd jot down ideas, work them over
       | slowly, sculpting them into something worth sharing. I'd obsess
       | over clarity, structure, and precision. That process didn't just
       | create content--it created thinking. Because for me, writing has
       | always been how I think. The act itself forced rigor. It refined
       | my ideas, surfaced contradictions, and helped me arrive at
       | something resembling truth. Thinking is compounding. The more you
       | do it, the sharper it gets.
       | 
       | But now, when a thought sparks, I can just toss it into a prompt.
       | And instantly, I'm given a complete, reasoned, eloquent response.
       | No uncertainty. No mental work. No growth.
       | 
       | It feels like I'm thinking--but I'm not. The gears aren't
       | turning. And over time, I can feel the difference. My intuition
       | feels softer. My internal critic, quieter. My cleverness, duller.
       | 
       | I believed I was using AI in a healthy, productive way--a bicycle
       | for the mind, a tool to accelerate my intellectual progress. But
       | LLMs are deceptive. They simulate the journey, but they skip the
       | most important part. Developing a prompt feels like work. Reading
       | the output feels like progress. But it's not. It's passive
       | consumption dressed up as insight.
       | 
       | Real thinking is messy. It involves false starts, blind alleys,
       | and internal tension. It requires effort. Without that, you may
       | still reach a conclusion--but it won't be yours. And without
       | building the path yourself, you lose the cognitive infrastructure
       | needed for real understanding.
       | 
       | Ironically, I now know more than ever. But I feel dumber. AI
       | delivers polished thoughts, neatly packaged and persuasive. But
       | they aren't forged through struggle. And so, they don't really
       | belong to me.
       | 
       | AI feels like a superintelligence wired into my brain. But when I
       | look at how I explore ideas now, it doesn't feel like
       | augmentation. It feels like sedation.
       | 
       | Still, here I am--writing this myself. Thinking it through. And
       | maybe that matters. Maybe it's the only thing that does.
       | 
       | Even if an AI could have written this faster. Even if it could
       | have said it better. It didn't.
       | 
       | I did.
       | 
       | And that means something.
        
       | keernan wrote:
        | >> the process of creation
        | >> my original thoughts
        | >> I'd have ideas
       | 
       | As far as I can tell, LLMs are incapable of any of the above.
       | 
       | I'd love to hear from LLM experts how LLMs can ever have original
       | ideas using the current type of algorithms.
        
       | aweiher wrote:
       | Have you tried writing this article on .. AI? *scnr
        
       | bsimpson wrote:
       | I admittedly don't use AI as much as many others here, but I have
        | noticed that whenever I take the time to write a hopefully-
        | insightful response in a place like Reddit, I immediately get
       | people trying to dunk on me by accusing me of being an AI bot.
       | 
       | It makes me not want to participate in those communities
       | (although to be honest, spending less time commenting online
       | would probably be good for me).
        
         | FeteCommuniste wrote:
          | It's a vicious cycle. As AI gets better at writing, more people
         | will rely on it to write for them, and as more AI content gains
         | prominence, more people will tend to ape its style even when
         | they do write something for themselves.
        
       | quantadev wrote:
       | There's quite a dichotomy going on in software development. With
       | AI, we can all create much more than we ever could before, and
       | make it much better and even in much less time, but what we've
       | lost is the sense of pride that comes with the act of
       | creating/coding, because nowadays:
       | 
       | 1) If you wrote most of it yourself then you failed to adequately
       | utilize AI Coding agents and yet...
       | 
       | 2) If AI wrote most of it, then there's not exactly that much of
       | a way to take pride in it.
       | 
       | So the new thing we can "take pride in" is our ability to
       | "control" the AI, and it's just not the same thing at all. So
       | we're all going to be "changing jobs" whether we like it or not,
       | because work will never be the same, regardless of whether you're
       | a coder, an artist, a writer, or an AD agency fluff writer. Then
       | again pride is a sin, so just GSD and stop thinking about
       | yourself. :)
        
         | hooverd wrote:
         | Maybe, but will anyone understand what they are creating?
        
           | quantadev wrote:
           | Right. In the near term, I'm predicting a dramatic decline in
           | software quality, because code was never understood and
           | tested properly. In the future we will have better processes,
            | and better ways of letting AI do its own
           | checking/validation, but that's lagging right now.
        
             | hooverd wrote:
              | It'll be interesting to see a society where there's a big
              | negative incentive to actually sit down and understand
              | how things work, in the name of efficiency.
        
       | ThomPete wrote:
       | My way out of this was to start thinking about what can't the
       | LLMs of the world do and my realization was actually quite simple
       | and quite satisfying.
       | 
       | What LLMs can't replace is network effects. One LLM is good but
       | 10 LLMs/agents working together creating shared history is not
       | replaceable by any LLM no matter how smart it becomes.
       | 
        | So it's simple. Build something that benefits from network
        | effects and you will quickly find new ideas; at least it worked
        | for me.
       | 
        | So now I am exploring, e.g., synthetic prediction markets via
        | https://www.getantelope.com or
        | 
        | rethinking MySpace but for agents instead:
        | https://www.firstprinciple.co/misc/AlmostFamous.mp4
        | 
        | AI wants to be social :)
        
       | xivzgrev wrote:
        | In life and with people, I think about a car: knowing when to
        | go and when to stop.
       | 
       | some people are all go and no stop. we call them impulsive.
       | 
       | some people may LOOK all go but have wisdom (or luck) behind the
       | scenes putting the brakes on. Example: Tom Cruise does his own
       | stunts, and must have a good sense for how to make it safe enough
       | 
       | What this author touches on is a chief concern with AI. In the
       | name of removing life friction, it removes your brakes. Anything
       | you want to do, just ask AI!
       | 
       | But should you?
       | 
       | I was out the other day, pondering what the word "respect" really
       | means. It's more elusive than simply liking someone. Several
       | times I was tempted to just google it or ask AI, but then how
       | would I develop my own point of view? This kind of thing feels
        | important to have your own point of view on. And that's what
        | we're losing - the things we should think about in this life,
        | we'll be tempted not to anymore. And we'll come out worse for
        | it.
       | 
       | All go, no brakes
        
       | asim wrote:
       | True. Now compare that to the creation of the universe and feel
       | your insignificance at never being able to match its creation in
       | any form. Try to create the code that creates blades of grass.
       | Good luck creating life. AI for all it's worth is a toy. One that
       | shows us our own limitations but replace the word AI with God and
       | understand that we spend a lot of time ruminating over things
       | that don't matter. It's an opinion, don't kill me over it. But
       | Dustin has reached an interesting point. What's the value of our
       | time and effort and where should we place it? If the tools we
       | create can do the things we used to do for "work" and "fun" then
       | we need to get back to doing what only humans were made for.
       | Being human.
        
       | mattsears wrote:
       | Peak Hacker News material
        
       | montebicyclelo wrote:
       | My thoughts are that it's key that humans know they will _still
       | get credit_ for their contributions.
       | 
       | E.g. imagine it was the case that you could write a blog post,
       | with some insight, in some niche field - but you know that
       | traffic isn't going to get directed to your site. Instead, an LLM
       | will ingest it, and use the material when people ask about the
       | topic, without giving credit. _If you know that will happen,
       | it's not a good incentive to write the post in the first place._
       | You might think, "what's the point".
       | 
       | Related to this topic - computers have been superhuman at chess
       | for 2 decades; yet good chess humans still get credit,
       | recognition, and I would guess, satisfaction, from achieving the
       | level they get to. Although, obviously the LLM situation is on a
       | whole other level.
       | 
       | I guess the main (valid) concern is that LLMs get so good at
       | thought that humans just don't come up with ideas as good as
       | them... And can't execute their ideas as well as them... And then
       | what... (Although that doesn't seem to be the case currently.)
        
         | datpuz wrote:
         | > I guess the main (valid) concern is that LLMs get so good at
         | thought
         | 
         | I don't think that's a valid concern, because LLMs can't think.
         | They are generating tokens one at a time. They're calculating
         | the most likely token to appear based on the arrangements of
         | tokens that were seen in their training data. There is no
         | thinking, there is no reasoning. If they seem like they're
         | doing these things, it's because they are producing text that
         | is based on unknown humans who actually did these things once.
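          | A minimal sketch of that "most likely next token" loop,
          | illustrative only: a toy bigram counter stands in for a real
          | model here, and `generate` is a hypothetical helper - this is
          | not how production LLMs are implemented, but the
          | one-token-at-a-time shape is the same.

```python
from collections import Counter, defaultdict

# Toy "training data".
corpus = "the cat sat on the mat and the cat slept".split()

# Count which tokens follow each token (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, n_tokens):
    """Greedily emit the most frequent next token, one at a time."""
    out = [start]
    for _ in range(n_tokens):
        counts = following.get(out[-1])
        if not counts:
            break  # no continuation was ever seen in training
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 4))
```

          | A real model replaces the count table with a neural network
          | conditioned on the whole context, but generation is still
          | this loop: predict one token, append it, repeat.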
        
           | montebicyclelo wrote:
           | > LLMs can't think. They are generating tokens one at a time
           | 
           | Huh? They are generating tokens one at a time - sure that's
           | true. But who's shown that predicting tokens one at a time
           | precludes thinking?
           | 
           | It's been shown that the models plan ahead, i.e. think more
           | than just one token forward. [1]
           | 
           | How do you explain the world models that have been detected
           | in LLMs? E.g. OthelloGPT [2] is just given sequences of games
           | to train on, but it has been shown that the model learns to
           | have an internal representation of the game. Same with
           | ChessGPT [3].
           | 
           | For tasks like this (and with words), _real thought is
           | required to predict the next token well_; e.g. if you don't
           | understand chess to the level of Magnus Carlsen, how are you
           | going to predict Magnus Carlsen's next move...
           | 
           | ...You wouldn't be able to, even just from looking at his
           | previous games; you'd have to actually understand chess, and
           | _think_ about what would be a good move (and in his style).
           | 
           | [1] https://www.anthropic.com/research/tracing-thoughts-
           | language...
           | 
           | [2] https://www.neelnanda.io/mechanistic-
           | interpretability/othell...
           | 
           | [3] https://adamkarvonen.github.io/machine_learning/2024/01/0
           | 3/c...
        
       | binary132 wrote:
       | Ok, nice article. Now prove an AI didn't write it.
       | 
       | As a matter of fact I'm starting to have my doubts about the
       | other people writing glowing, longwinded comments on this
       | discussion.
        
       | zzzbra wrote:
       | Reminds me of this:
       | 
       | "There are no shortcuts to knowledge, especially knowledge gained
       | from personal experience. Following conventional wisdom and
       | relying on shortcuts can be worse than knowing nothing at all."
       | -- Ben Horowitz
        
       | armchairhacker wrote:
       | I feel similar, except not because of AI but the internet. Almost
       | all my knowledge and opinions have already been explained by
       | other people who put in more effort than I could. Anything I'd
       | create, art or computation, high-quality similar things already
       | exist. Even this comment echoes similar writing.
       | 
       |  _Almost_. _Similar_. I still make things because sometimes what
       | I find online (and what I can generate from AI) isn't "good
       | enough" and I think I can do better. Even when there's something
       | similar that I can reuse, I still make things to develop my
       | skills for further occasions when there isn't.
       | 
       | For example, somebody always needs a slightly different
       | JavaScript front-end or CRM, even though there must be hundreds
       | (thousands? tens-of-thousands?) by now. There are many
       | programming languages, UI libraries, operating systems, etc. and
       | some have no real advantages, but many do and consequently have a
       | small but dedicated user group. As a PhD student, I learn a lot
       | about my field only to make a small contribution*, but chains of
       | small contributions lead to breakthroughs.
       | 
       | The outlook on creative works is even more optimistic, because
       | there will probably never be enough due to desensitization.
       | People watch new movies and listen to new songs not because
       | they're _better_ but because they're _different_. AI is
       | especially bad at creative writing and artwork, probably because
       | it fundamentally generates "average"; when AI art is good, it's
       | because the human author gave it a creative prompt, and when AI
       | art is _really_ good, it's because the human author manually
       | edited it post-generation. (I also suspect that when AI gets
       | creative, people will become even more creative to compensate,
       | like how I suspect today we have more movies that defy tropes and
       | more video games with unique mechanics; but there's probably a
       | limit, because something can only be so creative before it's
       | random and/or uninteresting.)
       | 
       | Maybe one day AI can automate production-quality software
       | development, PhD-level research, and human-level creativity. But
       | IME today's (at least publicly-facing) models really lack these
       | abilities. I don't worry about when AI _is_ powerful enough to
       | produce high-quality outputs (without specific high-quality
       | prompts), because assuming it doesn't lead to an apocalypse or
       | dystopia, I believe the advantages are so great, the loss of
       | human uniqueness won't matter anymore.
       | 
       | * Described in https://matt.might.net/articles/phd-school-in-
       | pictures/
        
       | rollinDyno wrote:
       | I recently finished my PhD studies in social sciences. Even
       | though it did not lead me to career improvements as I initially
       | expected, I am happy I had the opportunity to undertake an
       | academic endeavor before LLMs became cheap and ubiquitous.
       | 
       | I bring up my studies because what the author is talking about
       | strikes me as not having been ambitious enough in his thinking.
       | If you prompt current LLMs with your idea and find the generated
       | arguments and reasoning satisfactory, then you aren't really
       | being rigorous or you're not having big enough ideas.
       | 
       | I say this confidently because my studies showed me not only the
       | methods in finding and contrasting evidence around any given
       | issue, but also how much more there is to learn about the
       | universe. So, if you're being rigorous enough to look at
       | implications of your theories, finding datapoints that speak to
       | your conclusions and find that your question has been answered,
       | then your idea is too small for what the state of knowledge is in
       | 2025.
        
       | analog31 wrote:
       | Something I'm on the fence about, but just trying to figure out
       | from observation, is whether the AI can decide what is
       | _worthwhile_. It seems like most of the successes of AI that I
       | 've seen are cases where someone is tasked with writing something
       | that's not worth reading.
       | 
       | Granted, that happened before AI. The vast majority of text in my
       | in-box, I never read. I developed heuristics for deciding what to
       | ignore. "Stuff that looks like it was probably generated" will
       | probably be a new heuristic. It's subjective for now. One clue is
       | if it seems more literate than the person who supposedly wrote
       | it.
       | 
       | Stuff that's written for school falls into that category. It
       | existed for some reason other than being read, such as the hope
       | that the activity of writing conferred some educational benefit.
       | That was a heuristic too -- a rule of thumb for how to teach,
       | that has been broken by AI.
       | 
       | Sure, AI can be used to teach a job skill, which is writing text
       | that's not worth reading. Who wants to be the one who looks the
       | kids in the eye and explains this to them?
       | 
       | On the other hand, I do use Copilot now, where I would have used
       | Stackoverflow in the past.
        
       | largbae wrote:
       | This feels off, as if thinking were like chess and the game is
       | now solved and over.
       | 
       | What if you could turn your attention to much bigger things than
       | you ever imagined before? What if you could use this new
       | superpower to think more not less, to find ways to amplify your
       | will and contribute to fields that were previously out of your
       | reach?
        
       | sneak wrote:
       | > _This post was written entirely by a human, with no assistance
       | from AI. (Other than spell- and grammar-checking.)_
       | 
       | That's not what "no assistance" means.
       | 
       | I'm not nitpicking, however - I think this is an important point.
       | The very concept of what "done completely by myself" means is
       | shifting.
       | 
       | The LLMs we have today are vastly better than the ones we had
       | before. Soon, they will be even better. The complaint he makes
       | about the intellectual journey being missing might be alleviated
       | by an AI as intellectual sparring partner.
       | 
       | I have a feeling this post basically just aliases to "they can
       | think and act much faster than we can". Of course it's not as
       | good, but 60-80% as good, 100x faster, might be net better.
        
       | iamwil wrote:
       | > But it doesn't really help me know anything new.
       | 
       | Pursue that, since that's what LLMs haven't been helping you
       | with. LLMs haven't really generated new knowledge, though there
       | are hints of it--they have to be directed. There are two or three
       | times when I felt the LLM output was really insightful without
       | being directed.
       | 
       | --
       | 
       | At least for now, I find the stuff I have a lot of domain
       | expertise in, the LLM's output just isn't quite up to snuff. I do
       | a lot of work trying to get it to generate the right things with
       | the right taste, and even using LLMs to generate prompts to feed
       | into other LLMs to write code, and it's just not quite right.
       | Their work just seems...junior.
       | 
       | But for the stuff that I don't really have expertise in, I'm less
       | discerning of the exact output. Even if it is junior, I'm
       | learning from the synthesis of the topic. Since it's usually a
       | means to an end to support the work that I do have expertise in,
       | I don't mind that I didn't do that work.
        
       | iamwil wrote:
       | OP said something similar about writing blog posts when he found
       | himself doing twitter a lot, back in 2013. So whatever he did to
       | cope with tweeting, he can do the same with LLMs, since it seems
       | like he's been writing a lot of blog posts since.
       | 
       | > I've been thinking about this damn essay for about a year, but
       | I haven't written it because Twitter is so much easier than
       | writing, and I have been enormously tempted to just tweet it, so
       | instead of not writing anything, I'm just going to write about
       | what I would have written if Twitter didn't destroy my desire to
       | write by making things so easy to share.
       | 
       | and
       | 
       | > But here's the worst thing about Twitter, and the thing that
       | may have permanently destroyed my mind: I find myself walking
       | down the street, and every fucking thing I think about, I also
       | think, "How could I fit that into a tweet that lots of people
       | would favorite or retweet?"
       | 
       | https://dcurt.is/what-i-would-have-written
        
       | ryankrage77 wrote:
       | > All of my original thoughts feel like early drafts of better,
       | more complete thoughts that simply haven't yet formed inside an
       | LLM
       | 
       | I would like access to whatever LLM the author is using, because
       | I cannot relate to this at all. Nearly all LLM output I've ever
       | generated has been average, middle-of-the-road predictable slop.
       | Maybe back in the GPT-3 days before all LLMs were RLHF'd to
       | death, they could sometimes come up with novel (to me) ideas, but
       | nowadays often I don't even bother actually sending the prompt
       | I've written, because I have a rough idea of what the output is
       | going to be, and that's enough to hop to the next idea.
        
       | ay wrote:
       | Very strange. Either the author uses some magic AI, or I am
       | holding it wrong. I have used LLMs for a couple of years now, as
       | a nice tool.
       | 
       | Besides that:
       | 
       | I have tried using LLMs to create cartoon pictures. The first
       | impression is "wow"; but after a bunch of pictures you see the
       | evidently repetitive "style".
       | 
       | Using LLMs to write poetry is also quite cool at first,
       | but after a few iterations you see the evidently repetitive
       | "style", which is bland and lacks depth and substance.
       | 
       | Using LLMs to render music is amazing at first, but after a while
       | you can see the evidently repetitive style - for both rhymes and
       | music.
       | 
       | Using NotebookLM to create podcasts at first feels amazing, about
       | to open the gates of knowledge; but then you notice that the flow
       | is very repetitive, and that the "hosts" don't really show enough
       | understanding to make it interesting. Interrupting them with
       | questions somewhat dilutes this impression, though, so the jury
       | is out here.
       | 
       | Again, with generating the texts, they get a distant metallic
       | taste that is hard to ignore after a while.
       | 
       | The search function is okay, but with a little bit of a nudge
       | one can influence the resulting answer by a lot, so I am wary of
       | blindly taking the "advice"; I always recheck it, and try to run
       | two competing sessions where I nudge the LLM into taking
       | opposing viewpoints, and learn from both.
       | 
       | Using the AI to generate code - simple things are ok, but for
       | non-trivial items it introduces pretty subtle bugs, which require
       | me to ensure I understand every line. This bit is the most fun -
       | the bug quest is actually entertaining, as it is often the same
       | bugs humans would make.
       | 
       | So, I don't see the same picture, but something close to the
       | opposite of what the author sees.
       | 
       | Having an easy outlet to bounce the quick ideas off and a source
       | of relatively unbiased feedback brought me back to the fun of
       | writing; so literally it's the opposite effect compared to the
       | article author...
        
         | jstummbillig wrote:
         | Maybe you are not that great at using the most current LLMs or
         | you don't want to be? I find that increasingly to be the most
         | likely answer, whenever somebody makes sweeping claims about
         | the impotence of LLMs.
         | 
         | I get more use out of them every single day and certainly with
         | every model release (mostly for generating absolutely not
         | trivial code) and it's not subtle.
        
       | deepsun wrote:
       | > instantly get a fully reasoned, researched, and completed
       | thought
       | 
       | That's not my experience though. I tried several models, but
       | usually get a confident half-baked hallucination, and tweaking my
       | prompt takes more time than finding the information myself.
       | 
       | My requests are typically programming tho.
        
       | qudat wrote:
       | Man people are using LLMs much differently from myself. I use it
       | as an answer engine and that's about where I stop using it. It's
       | a great tool for research and quick answers but I haven't found
       | it great for much else.
        
         | j7ake wrote:
         | It's a faster Google search
        
       | thorum wrote:
       | If your goal is to have original and creative thoughts then you
       | need to fully explore an idea _on your own_ before getting
       | someone else's feedback: whether human or AI.
       | 
       | Feedback from other humans is great but will tend to nudge you
       | toward the current closest mainstream idea, whatever everyone
       | else is doing or thinking about.
       | 
       | Feedback from AI is just painfully bland. It's useful later on as
       | a research tool and to spot errors in your thinking. But "Give
       | the AI your half finished thought and let it finish it" is
       | unlikely to produce anything that matters.
       | 
       | Disconnect. Put your phone out of reach. Just you, your mind, and
       | maybe a pencil and paper. See how far you can go with just your
       | brain.
       | 
       | Only after you've gone as far as you can should you bring in
       | outside input. That's how you get to ideas that actually matter.
        
       | lcsiterrate wrote:
       | I wonder if there is anything like cs50.ai for other fields,
       | which acts as a guide by not instantly giving any answer. The
       | way I experiment to get this kind of experience is by using
       | custom instructions (most LLMs have this in their settings),
       | where I set them to Socratic mode so the LLM will ask questions
       | to stir ideas in my head.
        
       ___________________________________________________________________
       (page generated 2025-05-16 23:00 UTC)