[HN Gopher] Ask HN: Burnout because of ChatGPT?
       ___________________________________________________________________
        
       Ask HN: Burnout because of ChatGPT?
        
        TL;DR (summarised by ChatGPT) - I'm experiencing increased
        productivity and independence with ChatGPT but grappling with
        challenges such as lack of work-life boundaries and overwhelming
        information, leading to stress and burnout.

        Long story... I have been using ChatGPT for a while, and moved
        to the Plus subscription for their GPT-4 model, which, I must
        say, is quite good.

        1. ChatGPT makes us very productive. Personally, in my early
        40s, I feel my brain is back in its 20s.

        2. I no longer feel the need to hire juniors. This is a
        short-term positive and maybe a long-term negative. [[EDIT: I
        may have implied a wrong meaning. To clarify - nobody's going
        yet because of ChatGPT. It is just raising the bar higher and
        higher. What took me years to learn, this thing can already do,
        and much more. And I cannot predict the financial future of
        OpenAI or the markets in general.]]

        A lot of stuff I used to delegate to fellow humans is now being
        delegated to ChatGPT. And I can get the results immediately and
        at any time I want. I agree that it cannot operate on its own.
        I still need to review and correct things. I have to do that
        even when working with other humans. The only difference is
        that I can start trusting a human to improve, but I cannot
        expect ChatGPT to do so. Not that it is incapable, but because
        it is restricted by OpenAI.

        And I have gotten better at using it. Calling myself a
        prompt-engineer sounds weird.

        With all the good, I am now experiencing the cons, stress and
        burnout:

        1. Humans work 9-5 (or some schedule), but ChatGPT is available
        always and works instantly. Now, when I have some idea I want
        to try out - I start working on it immediately with the help of
        AI. Earlier I just used to put a note in the todo-list and
        stash it for the next day.

        2. The outputs with ChatGPT are so fast, that my "review load"
        is too high. At times it feels like we are working for ChatGPT
        and not the other way around.

        3. ChatGPT has the habit of throwing new knowledge back at you.
        Google does that too, but this feels 10x of Google. Sometimes
        it is overwhelming. Good thing is we learn a lot, bad thing is
        that it often slows down our decision making.

        4. I tried to put a schedule to use it - but when everybody has
        access to this tech, I have a genuine fear of missing out.

        5. I have zero doubt that AI is setting the bar high, and it is
        going to take away a ton of average-joe desk jobs. GPT-4 itself
        is quite capable and organisations are yet to embrace it.

        And not least, it makes me worry - what lies ahead with future
        models. I am not a layman when it comes to AI/ML - I worked
        with it up until a few years ago, in the pre-GPT era.

        Has anybody experienced these issues? And how do you deal with
        them?

        * I could not resist asking ChatGPT the above - a couple of
        strategies it suggested were to "Seek Support from Others" and
        to "Participate in discussions or groups focused on ethical
        AI". *
        
       Author : smdz
       Score  : 61 points
       Date   : 2023-08-14 20:10 UTC (2 hours ago)
        
       | bussyfumes wrote:
       | Don't discard the juniors, maybe ask them to process your prompts
       | for you. That'll give you some space.
       | 
       | I feel the opposite: I had a great experience asking GPT-4 to do
       | some tasks for me and have been feeling like I'm missing out ever
       | since by not using it more often.
       | 
        | However, I'm wary of posting work-related code into it, so I
        | either have to come up with similar examples, which is time-
        | consuming, or ask it conceptual questions, for which I haven't
        | been able to make it very helpful. Sometimes I even noticed
        | that a conversation with a colleague produced a much better
        | result and
       | it wasn't even something very specific to the project. So yeah, I
       | feel like it's a great tool but I'm having a hard time using it
       | productively. It definitely feels like being creative with your
       | prompts is an important part of getting value out of it.
        
         | willsmith72 wrote:
         | > I'm wary of posting work-related code into it
         | 
         | I'm always curious when I see this. Is it about potential IP in
         | the code? References to clients in the code? Secrets?
         | 
         | In my last job they were worried about it too, but decided the
          | pros outweighed the cons. Some of our code was client-specific
         | (CanvaMapper etc.), but we would remove brand names and then go
         | for it.
        
         | throitallaway wrote:
         | > Don't discard the juniors, maybe ask them to process your
         | prompts for you. That'll give you some space.
         | 
         | GPT can give incorrect, bad, or non-functional code. A senior
         | engineer that reviews GPT responses will (hopefully) spot that
         | and rectify it right away. Junior engineers can end up being
         | less productive and not learn a lot when encountering this.
        
       | egoregorov wrote:
       | I've been exposed more to pipeline automation than GPT4, like
        | Argo CD, and can see that taking away a lot of jobs. If
       | customers are connected to the code repository then developers
        | just need to push commits and Argo will take care of the rest,
       | including getting the latest code to the customer.
       | 
        | *This may be a bit of an oversimplification, but Argo showed me
        | that the whole pipeline can be automated.
        
       | obblekk wrote:
       | I've experienced a very similar feeling.
       | 
        | To me it feels exactly like finding Wikipedia in 2005, or getting
        | an iPhone + Wikipanion in 2008. The frontiers of my mind have
       | been unleashed. A real bicycle for the mind.
       | 
       | Here are some tactics I use to "turn off gpt":
       | 
       | 1. It'll be there tomorrow. The great thing about their threaded
       | model is you can easily find the convo and continue it tomorrow.
       | Remind yourself of that consciously (or tape it to your monitor!)
       | 
       | 2. You're not behind, you're ahead. 80% of Americans haven't
       | tried chatgpt. 95% of the world maybe.
       | 
       | 3. Don't worry about juniors. They'll still be hired because now
       | they'll ramp up faster and produce better code, using the same
       | tool you're using. Same thing that happened when stackoverflow
       | became popular and junior devs stopped "reading the source code"
       | or "reading man pages."
       | 
       | For all the limitations of GPT4, it truly is great at coding.
       | Exciting times.
        
         | purplecats wrote:
         | > 2. You're not behind, you're ahead. 80% of Americans haven't
         | tried chatgpt. 95% of the world maybe.
         | 
          | idk if anyone realistically compares themselves to the abstract,
          | nebulous "everyone". It's likely more so in regard to their
          | socioeconomic band.
        
       | footy wrote:
       | > Calling myself a prompt-engineer sounds weird.
       | 
       | I'll be honest, someone calling themself this sounds to me like
       | someone with no self respect.
       | 
       | > The only difference is that I can start trusting a human to
       | improve, but I cannot expect ChatGPT to do so. Not that it is
       | incapable, but because it is restricted by OpenAI.
       | 
       | Sounds like a good reason to hire juniors.
        
       | littlestymaar wrote:
       | > ChatGPT makes us very productive. Personally, in my early 40s,
        | I feel my brain is back in its 20s.
       | 
       | Is it supposed to be a good thing?
        
       | sorokod wrote:
       | > I no longer feel the need to hire juniors.
       | 
       | This seems to be contradicted by the text that follows.
        
         | [deleted]
        
         | sorokod wrote:
          | It is not clear what it is that you are actually doing, but
          | consider the possibility that there is a need to replace you
          | with a few juniors + an LLM.
        
         | mechagodzilla wrote:
         | This seems like a pretty good take - you found that you could
         | 'get rid of juniors' by working in an unsustainable way. Why
         | not work with now-super-productive junior employees that can
         | spread the cognitive load?
         | 
         | It would seem odd if this were the one time in the history of
         | computing where a big productivity boost didn't just lead to
         | increasingly big/complex software.
        
           | martindbp wrote:
           | > Why not work with now-super-productive junior employees
           | that can spread the cognitive load?
           | 
           | It's all about scaling up or out. Having co-workers is like
           | scaling out, you have to worry about aligning your goals,
           | there's a lot of communication overhead in general. Using
           | ChatGPT is like scaling up, you're just upgrading your skills
           | and intelligence and you still have very low latency as there
           | is only one mind in control.
        
             | mechagodzilla wrote:
             | Sure - it's just like moving to a better IDE, or a higher
             | level programming language, or a 10x faster CPU (which has
             | happened a couple times now in my career), or a better
             | compiler. All of those things just increased the
             | expectations and ambitions for what a team could
             | accomplish/manage though.
        
             | brandon272 wrote:
             | As the OP's problems suggest, you can only scale "up" with
             | ChatGPT to a certain point before it starts to introduce
             | new productivity problems and even problems related to
             | burnout and being able to properly digest and understand
             | everything being thrown at you. With respect to teams, it
             | also introduces risk by ensuring that larger workloads that
             | may previously have been shared by multiple people are
              | concentrated in a single person.
        
       | fleeno wrote:
       | Can you elaborate on how you're actually using ChatGPT? I'm a
       | developer and I haven't felt any need to use ChatGPT constantly.
       | 
       | What tasks are you delegating to ChatGPT that were previously
       | done by humans? Most of my input from others is regarding current
       | information specific to the task at hand. I don't see how ChatGPT
       | would have any idea what I'm talking about.
       | 
       | Do you have some specific examples you could share?
        
         | simonw wrote:
         | I have a bunch of examples myself. Here's a good recent one
         | (prompts are linked about half way down the post):
         | https://simonwillison.net/2023/Aug/6/annotated-presentations...
         | 
         | A few more:
         | 
         | - "Write a Python script with no extra dependencies which can
         | take a list of URLs and use a HEAD request to find the size of
         | each one and then add those all up"
         | https://simonwillison.net/2023/Aug/3/weird-world-of-llms/#us...
         | 
         | - "Show me code examples of different web frameworks in Python
         | and JavaScript and Go illustrating how HTTP routing works - in
         | particular the problem of mapping an incoming HTTP request to
         | some code based on both the URL path and the HTTP verb"
         | https://til.simonwillison.net/gpt3/gpt4-api-design
         | 
         | - "JavaScript to prepend a <input type="checkbox"> to the first
         | table cell in each row of a table"
         | https://til.simonwillison.net/datasette/row-selection-protot...
         | 
         | - "Write applescript to loop through all of my Apple Notes and
         | output their contents"
         | https://til.simonwillison.net/gpt3/chatgpt-applescript
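          | 
          | For a concrete picture, here is a minimal sketch of the kind of
          | script that first prompt asks for - my own illustrative
          | version, not the actual model output, assuming only the
          | standard library and URLs passed as command-line arguments:
          | 
          |     import sys
          |     from urllib.request import Request, urlopen
          | 
          |     def content_length(url):
          |         # HEAD request; treat a missing Content-Length as 0
          |         req = Request(url, method="HEAD")
          |         with urlopen(req) as resp:
          |             return int(resp.headers.get("Content-Length", 0))
          | 
          |     if __name__ == "__main__":
          |         urls = sys.argv[1:]
          |         total = sum(content_length(u) for u in urls)
          |         print(f"{total} bytes total")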
        
         | interstice wrote:
         | After putting some thought into this I think it has to do with
         | the kind of developer you are. In my case I'm usually across
         | 10-20 ecommerce websites doing various semi-unique jobs with
         | relatively simple code.
         | 
         | Largely I use CGPT for work that's boilerplate/LOC heavy but
         | architecture light, things like writing first drafts of React
          | hooks and the like. It's quite good with constraints like "use
          | TypeScript" or "use X function to do Y".
         | 
         | I usually give it about two goes if it goes in the wrong
         | direction on the first try. If it seems to not conceptually
         | understand what I'm asking I generally just write it directly
         | rather than tinkering with prompts for 20 minutes.
         | 
         | I also have a couple of longer system prompts saved for
         | converting Vue components to React using the house style and
         | things like that using the playground.
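          | 
          | For anyone curious what a saved system prompt looks like in
          | practice, here is a rough sketch of wiring one up through the
          | API instead of the playground (the prompt text and function
          | name are made up for illustration; uses the 2023-era openai
          | Python package):
          | 
          |     import openai  # pip install openai
          | 
          |     openai.api_key = "sk-..."  # set via env var in practice
          | 
          |     SYSTEM_PROMPT = (
          |         "You convert Vue single-file components to React "
          |         "function components in our house style: TypeScript, "
          |         "hooks only."
          |     )
          | 
          |     def convert_component(vue_source: str) -> str:
          |         # System prompt carries the house rules; the user
          |         # message carries the component to convert.
          |         response = openai.ChatCompletion.create(
          |             model="gpt-4",
          |             messages=[
          |                 {"role": "system", "content": SYSTEM_PROMPT},
          |                 {"role": "user", "content": vue_source},
          |             ],
          |             temperature=0,
          |         )
          |         return response.choices[0].message.content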
        
         | selestify wrote:
         | I would love to know this too. For me it's involved too much
         | manual copy-pasting of existing code for context, for it to
         | feel like it's doing much for me.
        
         | claytongulick wrote:
         | I'd love to understand this too - my experience has been that I
         | can generally write what I want faster than figuring out what
         | prompt will get something close to right, and then
         | editing/revising it to make it right.
         | 
         | Add to this the limited usefulness for generating code that's
         | contextual - making some method deep inside a component tree
          | that needs to reference a service class, pick some DOM
          | elements to mutate, etc. It requires knowledge and reasoning
         | about the project and overall code structure.
         | 
         | I don't understand how folks are using it as a productivity
         | booster, unless maybe as something like a better StackOverflow?
        
         | [deleted]
        
       | tivert wrote:
       | > 2. I no longer feel the need to hire juniors. This is a short-
       | term positive and maybe a long-term negative.
       | 
        | > A lot of stuff I used to delegate to fellow humans is now
       | being delegated to ChatGPT. And I can get the results immediately
       | and at any time I want. I agree that it cannot operate on its
        | own. I still need to review and correct things. I have to do that
       | even when working with other humans. The only difference is that
       | I can start trusting a human to improve, but I cannot expect
       | ChatGPT to do so. Not that it is incapable, but because it is
       | restricted by OpenAI.
       | 
       | I think this point bears repeating.
       | 
       | The threat of these models isn't that they'll go all Skynet and
       | kill everyone, it's that they'll cause a lot of economic
       | devastation to people who make a living through labor requiring
       | skill and knowledge, _especially future generations of skilled
       | labor_. Then there will be a decision point: either the senior-
       | level people who thought they were safe get replaced by a more-
        | advanced model, or they don't and there's a future society-level
       | shortage because the pipeline to produce _more_ senior-level
       | people has been shut down (like the OP is doing).
       | 
       | The only people who will come out (relatively) unscathed are the
       | ownership class, like always.
       | 
       | Of course, this is inevitable because it's impossible to question
       | or change our society's ideological assumptions. They must be
       | played out until they utterly destroy society.
        
         | dragonwriter wrote:
         | > Then there will be a decision point: either the senior-level
         | people who thought they were safe get replaced by a more-
         | advanced model, or they don't and there's a future society-
         | level shortage because the pipeline to produce more senior-
         | level people has been shut down (like the OP is doing).
         | 
         | Or, for every junior that isn't hired by a business that can't
         | expand its portfolio to exploit greater productivity or can't
         | figure out how to effectively use LLMs across the experience
         | spectrum, two will be hired in shops that can do those things,
         | and, as with previous software dev productivity increases,
         | greater productivity in the field will mean a broader range of
         | viable applications and more total jobs across all experience
         | levels.
        
           | coldtea wrote:
            | > _Or, for every junior that isn't hired by a business that
           | can't expand its portfolio to exploit greater productivity or
           | can't figure out how to effectively use LLMs across the
           | experience spectrum, two will be hired in shops that can do
           | those things_
           | 
           | And everybody also gets a pony! Win-win-win situation!
           | 
           | Previous "software dev productivity increases" happened as
            | computing saturation itself increased from a handful of
           | mainframes to one in every office, then at every desk, then a
           | few in every home, and later one in every hand. Now it's at
           | 100% or close.
           | 
            | It also still required computer operators. LLMs are not mere
            | increased productivity of a human computer operator, but
            | automation of productivity so that it can happen without an
            | operator (or with far fewer).
           | 
            | Moreover, all this "increased productivity" still left wages
           | stagnant for 40 years (with basic costs like housing,
           | education, healthcare skyrocketing). It's not like more of
           | it, in the same old corporatism context, bodes better for the
           | future...
        
             | dragonwriter wrote:
              | > LLMs are not mere increased productivity of a human
              | computer operator, but automation of productivity so that
              | it can happen without an operator (or with far fewer).
             | 
             | Enabling the same production with fewer workers (or,
             | equivalently, greater production with the same number of
             | workers) is the definition of a productivity increase, not
             | something that constitutes a difference in kind from a
             | normal productivity increase.
             | 
             | > Moreover, all this "increased productivity" still left
              | wages stagnant for 40 years
             | 
             | Not in computing it didn't. Same job category pay rose in
             | real terms over almost any window you choose in the last 50
             | years, and in most cases the distribution of jobs _also_
             | moved over time from lower-paid to higher-paid categories
             | within computing.
             | 
             | Also, even general real wages didn't really stagnate for 40
             | years, average (mean) wages _dropped_ slowly for 20 --
             | mid-70s to mid-90s, and mostly have slowly climbed since,
             | crossing over about 30-ish years after the past peak, but
              | the same effect isn't seen in median wages (though that
             | also was low in the early 1980s and most of the 1990s,
             | before mostly rising strongly) or median personal income
             | (which, despite short drops around recessions, has been
             | rising consistently strongly since the 1981 trough.)
        
               | coldtea wrote:
               | > _Enabling the same production with fewer workers (or,
               | equivalently, greater production with the same number of
               | workers) is the definition of a productivity increase,
               | not something that constitutes a difference in kind from
               | a normal productivity increase._
               | 
               | Of course. But "greater production with the same number
               | of workers" vs "the same production with fewer workers"
               | is already a difference in quantity (of both production,
               | and, the thing pertinent to the discussion, of workers).
               | 
               | And there's also "greater production with fewer workers"
               | - where you get to have your employer pie (fewer workers)
               | and eat it too (still get greater production).
        
         | cheschire wrote:
         | You can witness this happening in the trades right now. A whole
          | generation of people were told to go to college and to avoid the
         | trades, and now here we are in possibly the most significant
         | manpower drought the trades have ever experienced. And this has
         | a ripple effect as the older generations retire out, and take
         | their hard won experiences with them with nobody to pass their
         | knowledge onto. Can't tell a carpenter to go type that shit
         | into Confluence, let alone tell the kid to look in the
         | knowledge base first.
        
           | StevePerkins wrote:
           | And yet the trades still have uneven access (or none at all)
           | to health coverage, retirement planning options, etc.
           | 
           | As an American parent of young children, I keep being told
           | that college is a scam and I should steer my kids toward the
           | trades. 90+% of the time, I am being told this by a white-
           | collar worker who went to college themselves, and is just
           | bloviating.
           | 
           | When we reach a real crisis point, severe enough to actually
           | consider granting skilled tradespeople access to a fraction
           | of the privilege enjoyed by white-collar workers, then I
           | might consider nudging my kids toward electrician or plumbing
           | work. But under the current social caste system, of course I
           | am going to do everything possible to give my kids access to
           | college and steer them that way.
           | 
           | I believe that virtually everyone, white-collar and blue-
           | collar alike, quietly feels likewise. We make a pretense of
           | giving contrary advice, but mostly just in hopes that other
           | people will move in that direction for us. To take the bullet
           | and help with this imbalance, and also to relieve the intense
           | competition our own kids face.
        
             | the_only_law wrote:
             | > I am being told this by a white-collar worker who went to
             | college themselves, and is just bloviating.
             | 
             | Exactly. When I talk to plumbers, electricians, etc. many
             | of them express the desire to leave because the hours and
              | environments are hellish. Meanwhile some full-of-themselves
              | tech bro is babbling on about how everyone (not them, of
              | course) should go into the trades. Or
             | they pull some vague anecdote out of their ass about how
              | someone they know makes a gazillion dollars in the trades
             | after 20 years and starting their own business, which is
             | about as valid as telling someone to go into software
             | development because they can become a billionaire, and
             | throwing out some anecdote about a startup founder they
              | know who got acquired.
        
         | obblekk wrote:
         | I don't think so. Junior engineers will learn much much faster
          | than in the past (think about how much more effective GPT4 is as
          | a
         | learning tool than the "rubber ducky method" or manpages or
         | even stackoverflow).
         | 
         | And a part of their role will morph into prompting GPT4 (much
         | like this senior engineer has started doing).
         | 
         | If GPTx ends up in the narrow area where it's universally
         | smarter than junior engs but definitely not capable of being a
         | senior eng, then junior engs will just shift to the little
         | remaining work for senior engs, shadow them for months to years
         | like an apprenticeship.
         | 
         | Of course in that case the total number of eng needed will also
         | decrease (already only a small percent ever get good enough to
         | be considered truly senior), so there will be selection bias
         | toward more intelligent engineers who are a step above GPTx. If
         | none are left, then the profession will be gone and there will
         | be no problem.
        
         | Dalewyn wrote:
         | >it's that they'll cause a lot of economic devastation to
         | people who make a living through labor requiring skill and
         | knowledge, especially future generations of skilled labor.
         | 
         | If a task can be completed satisfactorily by an automated
         | computer program, was the task really "skilled labor"?
         | 
         | I ask this sincerely, because some of the occupations being
         | replaced/evicted (eg: copywriting) were clearly given more
         | skill value than they should have.
        
         | JohnFen wrote:
         | > The threat of these models isn't that they'll go all Skynet
         | and kill everyone, it's that they'll cause a lot of economic
         | devastation to people who make a living
         | 
         | Yes. This is pretty much my only concern about these models,
          | and I'm _powerfully_ concerned about this. It's hard to see
         | how this will lead to a good place. It seems more likely that
         | this will lead to increased poverty and multiple socioeconomic
         | crises.
         | 
         | I am even more concerned that very few people are talking about
         | this, and none of the power players in this space are, except
         | for occasional mentions in passing of fantasies like UBI.
        
           | bloppe wrote:
           | > I am even more concerned that very few people are talking
           | about this.
           | 
           | People have been talking about the threat of automation since
           | the very beginning of the industrial revolution. It just
           | never plays out nearly that badly, and short-term disruptions
           | are always outweighed by long-term efficiency gains within ~5
           | years or so; even those who experience the worst career
           | disruption tend to end up better off within that time frame.
           | 
           | I certainly would not like for my career to be disrupted for
           | ~5 years, but the alternative would be worse.
        
             | JohnFen wrote:
             | > even those who experience the worst career disruption
             | tend to end up better off within that time frame.
             | 
             | That hasn't been my observation at all. In the US, there
              | are large swaths of the nation that _still_ haven't
             | recovered from the last similar event.
             | 
             | To add additional worry, the last time, everyone was told
             | that the way out was to "upskill" into knowledge and
             | service industries. Which a lot of people did, and those
             | people were fine. But what are people to do this time?
             | "Upskilling" back to physical jobs can only absorb so many
             | workers, particularly since there aren't as many such jobs
             | as there used to be.
             | 
             | This is all why I'm so concerned. I don't think history
             | gives us any real reason to be optimistic here. In the very
             | long term -- a couple of generations, say -- perhaps. But
             | in the meantime? Even ignoring the ethics of some people
             | deciding that others are expendable, the people being
             | kicked to the curb will still have to find a way to eat,
             | keep a roof over their head, etc.
             | 
             | If even 10-20% of the population can't do that, we're in
             | big trouble.
        
           | Animats wrote:
           | I've been asking for years, if we have all these computers,
           | why do we need so many people in offices? Now we seem to have
           | passed "peak office", with much help from the pandemic.
           | 
           | If everything you do for money goes in and out over a wire,
           | be very afraid.
        
             | waffletower wrote:
             | From my vantage, short-term (3-5 year) fear seems
             | unfounded. As a software engineer, I can clearly see what
             | ChatGPT and its LLM ilk can and can't do that I can easily
             | do myself. LLMs clearly accelerate my access to API
             | documentation and provide excellent outline code. But
             | hallucinations are omnipresent and can often necessitate
             | additional iteration and rethinking of development
             | approaches. I think the productivity boost due to LLM usage
             | is smaller than many credit them for. The intensity of
             | employment displacement fear comes from an illusion that
             | LLMs have agency. AutoGPT is not much more than an
             | experimental repo and there isn't a viable alternative yet.
             | "WHEN YOU COMMAND AN LLM, YOU ARE THE AGENCY", LLMs are
             | mere extensions. Don't sell yourself short, prompt
             | crafting/engineering is where the agency lies and requires
             | real knowledge and context to empower you effectively use
             | them for successful software engineering.
        
               | JohnFen wrote:
               | > The intensity of employment displacement fear comes
               | from an illusion that LLMs have agency.
               | 
               | I don't think so. My concerns have nothing to do with
               | agency, anyway. Nor are my concerns limited to (or even
               | primarily about) impact on software engineering
               | specifically.
               | 
               | Even if LLMs perform worse, if using them will save
               | companies money over employing people, then those people
               | are gone.
        
         | Animats wrote:
         | > I no longer feel the need to hire juniors.
         | 
         | I hear that from a friend in the legal business. Less need for
         | paralegals. Unclear yet if the need for new lawyers will be
         | reduced.
        
         | FpUser wrote:
          | I use ChatGPT very actively for programming, among other
          | things, and at no point do I feel threatened - rather,
          | empowered. No burnout either, as I just work as usual. It has
          | just replaced Google Search and a lot of typing.
        
       | mupuff1234 wrote:
       | What are you working on that's so urgent?
       | 
       | The answer is probably that it's not.
        
       | Jtsummers wrote:
       | > 1. Humans work 9-5 (or some schedule), but ChatGPT is available
       | always and works instantly. Now, when I have some idea I want to
       | try out - I start working on it immediately with the help of AI.
       | Earlier I just used to put a note in the todo-list and stash it
       | for the next day.
       | 
       | This is a time management problem and a setting boundaries
       | problem. When I leave work, if I have an idea (work related) I
       | jot it into a notebook to review the next day. After I leave (no
       | later than 1630 every day) I am not obligated to work, so I
       | don't. I exercise, read, study, spend time with my wife, play
       | with the cats, whatever I feel like doing. If they want me to
       | work 24x7, they can increase my pay by 20x because they'll only
       | get a year of use out of me and I can retire with that income in
       | a year or so without issue. They pay for 8 hours, they get 8
       | hours.
       | 
       | > 2. The outputs with ChatGPT are so fast, that my "review load"
       | is too high. At times it feels like we are working for ChatGPT
       | and not the other way around.
       | 
       | Then slow down. See my response to (1). Your time management
       | skills are in desperate need of development. Ask less of ChatGPT.
       | Only ask enough to complete an objective, no more. Don't ask it
       | for information faster than you can process it. And if you feel
       | the need to ask it a million and one questions, delegate
       | processing its responses to others (bring back your juniors).
       | 
       | > 3. ChatGPT has the habit of throwing new knowledge back at you.
       | Google does that too, but this feels 10x of Google. Sometimes it
       | is overwhelming. Good thing is we learn a lot, bad thing is that
        | it often slows down our decision making.
       | 
       | > 4. I tried to put a schedule to use it - but when everybody has
       | access to this tech, I have a genuine fear of missing out.
       | 
       | FOMO is real, but like most fears it's a waste. There is no
        | existential crisis. You are not being chased by a bear; you
        | appear to be a professional, so you have a steady income, you
        | know where your next meal is coming from, and you have shelter.
        | Your fear
       | is unwarranted, even if normal. Seek out counseling or therapy to
       | learn how to manage fear and anxiety more effectively.
        
       | jorblumesea wrote:
        | Do people really think they don't need juniors because their IDE
        | has better autocomplete? ChatGPT and LLMs are very cool, but
       | I'm surprised that people think like this. It just makes your
       | juniors more productive and you can have them do more.
       | 
        | GitHub Copilot and other tools help you scale up, not out. At the
       | end of the day teams may be smaller, but someone needs to guide
       | it.
       | 
       | I'm not sure what you do, but I can't see it for most SWE jobs.
        | These posts make me question whether people understand LLMs or
       | have zero quality controls at their workplace.
       | 
       | It almost feels like every day people just make up wild stories
       | that seem untrue.
        
       | Havoc wrote:
       | > Humans work 9-5 (or some schedule), but ChatGPT is available
       | always and works instantly.
       | 
       | So don't use it outside of work hours.
       | 
       | If you feel compelled to solve work problems outside of work
        | hours, that isn't a ChatGPT issue. It's just vanilla workaholism.
        
       | coldtea wrote:
       | > _I tried to put a schedule to use it - but when everybody has
       | access to this tech, I have a genuine fear of missing out._
       | 
       | The biggest fear should be missing out on life. Not some novel
       | tech.
        
       | Paul-Craft wrote:
       | Interesting observations. For context, it looks like you are a
       | software engineer from your comment history, is that correct?
       | 
        | I'm wondering why you're no longer feeling the need to hire
        | juniors because of GPT-4. Is it because GPT-4 has taken up the
        | cognitive load capacity you need for mentoring juniors, or do you
        | feel like GPT "obsoletes" less experienced people?
       | 
       | I think ChatGPT's advice is on the right track. It sounds to me
       | like your experience of using it is kind of like my experience of
       | pairing with someone else of equal-ish ability: productive, but
       | draining, due to the need to constantly pay attention. If so, why
       | not treat it similarly? Most people don't pair all day every day,
       | probably because of the aforementioned cognitive load of doing
       | so.
       | 
       | Last, but not least, while this may seem obvious, you should
       | remember that you are human and not a machine. You _need_ to
       | separate yourself from this thing for at least some portion of
       | your day. The constant stress (and, yes, that dopamine rush you
        | feel when you use it _is_ a kind of stress -- stress isn't
       | always a purely negative thing) will take its toll on you
       | eventually. That's the "burnout" you're perceiving, and the only
       | way to prevent it is to just not let it happen.
       | 
       | Take care of yourself. Socialize and interact with humans,
       | especially close friends and/or SO's as applicable. If you have a
       | pet, spend some time with them. Take a walk.
       | 
       | But, most of all, remember that GPT-x, as smart as it may appear,
       | can't actually _learn_ anything from experience. It can only
       | learn from an expensive and labor-intensive process, and once its
        | training is done, it's frozen in time forever (modulo some fine-
       | tuning, which is essentially an extension of said labor-intensive
       | training process). And, at the end of the day, that just makes it
       | a very versatile, very expensive, and very useful tool, but a
       | tool nonetheless.
        
       | itronitron wrote:
       | Maybe buy a juicer as well? I hear that is a pretty amazing
       | technology, you just set it and forget it.
        
       | [deleted]
        
       | tetha wrote:
       | > 2. I no longer feel the need to hire juniors. This is a short-
       | term positive and maybe a long-term negative.
       | 
        | The way I view it, I don't hire juniors either. Rather, I'm
       | hiring the regular admin I'll have on the team in a year or two
       | who will take over all of the mundane stuff I currently have to
       | handle. At that point, I don't have to ask ChatGPT for a fix,
       | think about the fix, implement the good parts... at that point,
       | Zabbix will just open a ticket "This is broken" and someone else
       | will take care of it.
       | 
       | That takes away real workload from me, and allows them to learn a
       | lot.
       | 
       | > 1. Humans work 9-5 (or some schedule), but ChatGPT is available
       | always and works instantly. Now, when I have some idea I want to
       | try out - I start working on it immediately with the help of AI.
       | Earlier I just used to put a note in the todo-list and stash it
       | for the next day.
       | 
       | Here, my main question would be: Why is ChatGPT special? I've
        | burned the midnight oil for an employer just with boring tools
        | like Terraform and configuration management. They are paying me 9 -
       | 5, and I'll work for them most effectively during that time,
       | which at this point certainly includes ChatGPT or Copilot. But I
       | don't really see the point of putting work in for them outside of
       | office hours (and emergencies), regardless of the tools involved.
        
       | johnbellone wrote:
       | Perhaps I do not understand what you're actually using ChatGPT to
       | do, but I can't see it taking over the role of junior developers
       | anytime soon.
        
         | joe_the_user wrote:
         | Yes, the post is noticeably vague about what task the poster is
         | actually doing.
        
       | karmajunkie wrote:
       | > 1. Humans work 9-5 (or some schedule), but ChatGPT is available
       | always and works instantly. Now, when I have some idea I want to
       | try out - I start working on it immediately with the help of AI.
       | Earlier I just used to put a note in the todo-list and stash it
       | for the next day.
       | 
        | This sounds like the root of your problem, and it rests entirely
        | on your ability to enforce boundaries (which you may or may not
        | have set for yourself). No judgment here; I think we all have
        | struggled
       | with this at one time or another. Or, you know, constantly...
       | 
       | > 4. I tried to put a schedule to use it - but when everybody has
       | access to this tech, I have a genuine fear of missing out.
       | 
       | I definitely know that feeling. I think the likely outcome writ
       | large is that this FOMO feeling will eventually subside. The
       | economy for years has needed more developers than were available;
       | ChatGPT and friends will result in individuals being able to do
       | more and soak up demand that way instead of increasing supply.
       | The long-term negative effect of this is more likely to be
       | depressed wages instead of massive unemployment in the tech
       | sector.
       | 
       | > 5. I have zero doubt that AI is setting the bar high, and it is
       | going to take away a ton of average-joe desk jobs. GPT-4 itself
       | is quite capable and organisations are yet to embrace it.
       | 
        | Another way of looking at it is that it's going to _create_ a
        | number of desk jobs, but those who can't adapt to the tools on
       | the market will suffer in the same way that people who couldn't
       | adapt to the use of spreadsheets, word processors, etc, certainly
       | had fewer job opportunities than those who did. Some people are
       | going to get left behind, no doubt--this is why I'm in favor of a
       | robust social safety net. But even with questionable public
       | support for those people, I don't think anyone today would
       | suggest we should retreat to an economy that didn't have such
        | basic tools as spreadsheets and word processor apps.
        
       | kfarr wrote:
       | I'm reminded of the difference between being efficient vs
        | effective. So many of the example use cases I see people --
       | including myself -- using GPT for are unimportant short-term
       | tasks that necessarily take away head space and time from long-
       | term important tasks. Those long-term important tasks are the
       | hard ones requiring existing application context where I
       | experience LLMs struggling. If we're not careful we'll get
       | DDoS'ed by the tasks that an LLM can complete at the expense of
       | other tasks. Of course this may change in the future as things
        | progress, but that is my observation for now.
        
         | fatfingerd wrote:
         | I think I get you, though I've been thinking of it rather
         | differently.
         | 
          | I feel like a lot of the evergreen hype in computing is
          | frameworks, practices, etc. that try to break things down into
          | a system where any junior could then just fill in each piece -
          | and of course this always collides with larger-context
          | problems.
         | 
          | Once you get to a certain point with such a system, either you
          | have been paying attention all along or you have no idea what
          | you've made or how to deal with a real cross-cutting problem.
          | At that point the system's promise is really irrelevant: you
          | succeed based on actual expertise you supposedly weren't going
          | to need.
         | 
          | With GPT-like AI at around its current level, I feel like some
          | of these systems for breaking down programming projects are
          | going to face an actual test, now that the "junior engineers"
          | doing the work are just some GPU costs that can be run in
          | parallel and won't have the usual heterogeneous-resources
          | problems of testing with a real project team.
         | 
          | I'm not really sure if any systems will survive (or whether
          | something learned in the process will make a good one), but I
          | feel like it would be proof of a holy grail that is supposedly
          | quite important, and even just the refutation of many systems
          | would itself be a major disruption to the field.
        
       | ChrisArchitect wrote:
       | Ask HN:
        
       | al2o3cr wrote:
        | > ChatGPT has the habit of throwing new knowledge back at you.
        | 
        | That's certainly ONE way to characterize its tendency to
        | hallucinate APIs and operating modes out of thin air.
        | 
        | > I no longer feel the need to hire juniors.
        | 
       | You've just described how you're overworked and burning out from
       | doing too much stuff yourself. Are you sure about that absence of
       | need?
        
       | Nezteb wrote:
       | I use ChatGPT (with the paid GPT-4 model) for certain things when
       | I'm stuck. I use it to explain/rephrase concepts that I'm
       | confused about. I occasionally generate some anonymized stuff
       | like short code snippets, devops configs, boilerplate, and tests.
       | 
       | Are people using it to pump out full apps and services or what?
       | Anytime I've tried that the result quality is poor, even after
       | lengthy explanations of what I'm asking it to build. Sure,
       | sometimes it saves a little bit of time, but sometimes it also
        | wastes my time by giving me nonsense or never zeroing in on what
       | I'm asking for. I don't see how people are becoming so much more
       | "productive" with this, unless they're mostly talking about stuff
       | like non-code written content.
       | 
       | It doesn't help that my company's infosec policies forbid putting
        | any proprietary data or code into these AI tools, which is why I
       | only ever ask for short snippets.
        
         | tmpz22 wrote:
          | I'm interested in specific use cases too. Anecdotally, I only see
         | people claim huge success or no success from these tools. Where
          | are the dirty war stories? It's been out for about a year, right?
        
       ___________________________________________________________________
       (page generated 2023-08-14 23:01 UTC)