[HN Gopher] Why didn't AI "join the workforce" in 2025?
       ___________________________________________________________________
        
       Why didn't AI "join the workforce" in 2025?
        
       Author : zdw
       Score  : 41 points
       Date   : 2026-01-05 22:10 UTC (50 minutes ago)
        
 (HTM) web link (calnewport.com)
 (TXT) w3m dump (calnewport.com)
        
       | dandelionv1bes wrote:
       | The response to the Sal Khan op-ed resonated with me, along with
       | other parts of this article. Something I've been digging more
       | into is some of the figures around proposed job losses from AI. I
       | think I even posted a simulation paper last week.
       | 
        | After posting that, I came across numerous papers critiquing the
        | approach of Frey & Osborne, who are among the forefathers of the
        | AI job-loss figures we commonly see bandied about these days.
        | One such paper is here, but I can dig out others:
        | https://melbourneinstitute.unimelb.edu.au/__data/assets/pdf_...
       | 
       | It has made me very cautious around bold statements on AI - and I
       | was already at the cautious end.
        
         | Retric wrote:
          | Job losses aren't directly tied to productivity; in the short
          | term it's all about expectations. Many companies are laying
         | people off and then trying to get staff back when it doesn't
         | work. How much of this is hype and how much is sustained is
         | difficult to determine right now.
        
       | matt3210 wrote:
        | A previous company I worked for in San Francisco was very anti-
        | remote, but they suddenly announced on LinkedIn that they are OK
        | with remote engineers. It seems it's still a workers' market, at
        | least in SF. If AI could do the work, or even just reduce head
        | count, I don't think that would be the case.
        
       | senordevnyc wrote:
        | Pretty ironic that he complains about Khan citing someone who
       | told him AI agents are capable of replacing 80% of call center
       | employees, right after quoting _Gary Marcus_ of all people,
       | claiming LLMs will never live up to the hype.
       | 
       | If you want to focus on what AI agents are actually capable of
       | today, the last person I'd pay any attention to is Marcus, who
       | has been wrong about nearly everything related to AI for years,
       | and does nothing but double down.
        
         | Jonovono wrote:
          | What has he been wrong about? He was way ahead in predicting
          | the scaling limitations and LLMs not making it to AGI.
        
           | verdverm wrote:
            | What scaling limitations? Gemini 3 shows us that scaling is
            | not over yet, and its little brother Flash is a hyper-sparse
            | 1T-parameter model (AIUI) that is both fast and good.
        
       | jcastro wrote:
       | > In one example I cite in my article, ChatGPT Agent spends
       | fourteen minutes futilely trying to select a value from a drop-
       | down menu on a real estate website
       | 
        | Man dude, don't automate the toil - add an API to the website.
        | It's supposed to have one!
        
         | stvltvs wrote:
         | It probably has one that the web form is already using, but if
         | agentic AI requires specialized APIs, it's going to be a while
         | before reality meets the hype.
        
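The contrast the two comments above describe can be sketched in a few lines: instead of an agent spending fourteen minutes fighting a drop-down with screenshots and clicks, the same filter becomes one structured request against an API. The endpoint and parameter names below are invented for illustration; the real estate site in the article is not known to expose anything like this.

```python
from urllib.parse import urlencode

# Hypothetical listing-search endpoint -- purely illustrative.
API_BASE = "https://example-realty.test/api/v1/listings"

def build_search_url(city: str, min_beds: int, max_price: int) -> str:
    """Express the filter a dropdown UI encodes as one GET request.

    An agent that can call a URL like this skips the UI automation
    entirely: no vision model, no DOM traversal, no <select> element.
    """
    params = {
        "city": city,
        "min_beds": min_beds,
        "max_price": max_price,
        "format": "json",
    }
    return f"{API_BASE}?{urlencode(params)}"

url = build_search_url("Memphis", 3, 450_000)
```

As stvltvs notes, the web form is almost certainly already backed by such an endpoint; the open question is whether sites will document it for agents.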
       | edfletcher_t137 wrote:
       | > But for now, I want to emphasize a broader point: I'm hoping
       | 2026 will be the year we stop caring about what people believe AI
       | might do, and instead start reacting to its real, present
       | capabilities.
       | 
       | > So, this is how I'm thinking about AI in 2026. Enough of the
       | predictions. I'm done reacting to hypotheticals propped up by
       | vibes. The impacts of the technologies that already exist are
       | already more than enough to concern us for now...
       | 
       | SPOT ON, let us all take inspiration. "The impacts of the
       | technologies that already exist are already more than enough to
       | concern us for now"!
        
       | doctorpangloss wrote:
       | Cal Newport looked in the wrong places. He has no visibility into
       | the usage of ChatGPT to do homework. The collapse of Chegg should
       | tell you, with no other public information, that if 30% of
       | students were already cheating somehow, somewhat weakly, they are
       | now doing super-powerful cheating, and surely more than 30% of
       | students at this stage.
       | 
        | It's also kind of stupid to hand-wave away programming.
       | Programmers are where all the early adopters of software are.
       | He's merely conflating an adoption curve with capabilities.
       | Programmers, I'm sure, were also the first to use Google and
        | smartphones. "It doesn't work for me" is missing the critical
        | word "yet" at the end. And really, is it saying much that
        | forecasting the metric "years until Cal Newport's arbitrary
        | criteria for what agents and adoption mean meet some threshold
        | that exists only inside Cal Newport's head" is hard to do?
       | 
       | There are 700m active monthlies for ChatGPT. It has joined the
       | workforce! It just isn't being paid the salaries.
        
         | bpavuk wrote:
         | read it again. he criticizes the hype built around 2025 as the
         | Year X for agents. many were thinking that "we'll carry PCs in
         | our pockets" when Windows Mobile-powered devices came out. many
         | predicted 2003 as the Year X for what we now call smartphones.
         | 
         | no, it was 2008, with the iPhone launch.
        
         | lukev wrote:
         | Wow, homework is an insane example of a "workforce."
         | 
         | Homework is in some ways the opposite of actual economic labor.
         | Students _pay_ to attend school, and homework is
         | (theoretically) part of that education; something designed to
         | help students learn more effectively. They are most certainly
         | not paid for it.
         | 
          | Having an LLM do that "work" is economically insane. The desired
         | learning does not happen, and the labor of grading and giving
         | feedback is entirely wasted.
         | 
         | Students use ChatGPT for it because of perverse incentives of
         | the educational system. It has no bearing on economic
         | production of value.
        
       | bpavuk wrote:
       | a stellar piece, Cal, as always. short and straight to the point.
       | 
        | I believe that Codex and the like took off (in comparison to
        | e.g. "AI" browsers) because the bottleneck there was not
        | reasoning about code, it was typing and processing walls of
        | text. for a human, the interface of e.g. Google Calendar is more
        | or less intuitive. for an LLM, any graphical interface is an
        | absolute hellscape from a performance standpoint.
       | 
       | CLI tools, which LLMs love to use, output text and only text, not
       | images, not audio, not videos. LLMs excel at text, hence they are
       | confined to what text can do. yes, multimodal is a thing, but you
       | lose a lot of information and/or context window space + speed.
       | 
        | LLMs are a flawed technology for general, true agents. 99% of
        | the time, outside code, you need eyes and ears. so far we have
        | only created self-writing paper.
        
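The point about CLI tools being a natural fit for LLMs can be sketched as a tiny agent-side tool wrapper: run a command, hand the model plain text. The wrapper name is made up; only the shape of the idea comes from the comment above.

```python
import subprocess

def run_cli_tool(argv: list[str], timeout: float = 30.0) -> str:
    """Run a command and return its text output for an LLM to read.

    CLI tools emit text and only text, which is exactly the modality
    LLMs handle best -- no pixels, no rendering, no vision model.
    """
    result = subprocess.run(
        argv,
        capture_output=True,
        text=True,       # decode bytes to str
        timeout=timeout,
    )
    # Surface stderr too, so the model can react to failures.
    return result.stdout if result.returncode == 0 else result.stderr

output = run_cli_tool(["echo", "hello from a text-only tool"])
```

Everything the tool knows ends up in the context window as tokens, which is why coding agents lean so heavily on shells rather than GUIs.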
       | observationist wrote:
        | I've seen organizations where 300 of 500 people could effectively
        | be replaced by AI, just by having some of the remaining 200
       | orchestrate and manage automation workflows that are trivially
       | within the capabilities of current frontier models.
       | 
       | There's a whole lot of bullshit jobs and work that will get
       | increasingly and opaquely automated by AI. You won't see jobs go
       | away unless or until organizations deliberately set out to reduce
       | staff. People will use AI throughout the course of their days to
       | get a couple of "hours" of tasks done in a few minutes, here and
       | there, throughout the week. I've already seen reports and
       | projects and writing that clearly comes from AI in my own
       | workplace. Right now, very few people know how to recognize and
       | assess the difference between human and AI output, and even fewer
       | how to calibrate work assignments.
       | 
       | Spreadsheet AIs are fantastic, reports and charting have just hit
       | their stride, and a whole lot of people are going to appear to be
       | very productive without putting a whole lot of effort into it.
        | And then one day, when sufficiently knowledgeable and aware people
        | make it into management, all sorts of jobs are going to go
        | quietly away, until everything is automated, because it doesn't
        | make sense to pay a human six figures for what an AI can do for
        | three figures a year.
       | 
        | I'd love to see every manager in the world start charting the
        | Pareto curves for their workplaces, alongside actual hours
        | worked per employee - work output is going to be very wonky, and
       | the lazy, clever, and ambitious people are all going to be using
       | AI very heavily.
       | 
       | Similar to this guy:
       | https://news.ycombinator.com/item?id=11850241
       | 
       | https://www.reddit.com/r/BestofRedditorUpdates/comments/tm8m...
       | 
        | Part of the problem is that people don't know how to measure work
        | effectively to begin with, let alone in the context of AI
        | chatbots that can effectively do better work than a significant
        | portion of the adult population of the planet.
       | 
       | The teams that fully embrace it, use the tools openly and
       | transparently, and are able to effectively contrast good and poor
       | use of the tools, will take off.
        
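Charting a Pareto curve of output per employee, as the comment above suggests, is a few lines of arithmetic: sort contributions in descending order and accumulate their share of the total. The team numbers below are invented sample data.

```python
def pareto_curve(outputs: list[float]) -> list[float]:
    """Cumulative share of total output, largest contributors first.

    A steep early rise means a few people (perhaps the heaviest AI
    users) are producing most of the work.
    """
    total = sum(outputs)
    curve, running = [], 0.0
    for x in sorted(outputs, reverse=True):
        running += x
        curve.append(running / total)
    return curve

# Hypothetical weekly output units for a ten-person team.
team = [40, 25, 12, 8, 5, 4, 3, 1, 1, 1]
curve = pareto_curve(team)
# Here the top three people account for 77% of total output.
```

Plotting this alongside hours worked would make the "wonky" distribution the commenter predicts immediately visible.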
       | milancurcic wrote:
       | > The industry had reason to be optimistic that 2025 would prove
       | pivotal. In previous years, AI agents like Claude Code and
       | OpenAI's Codex had become impressively adept at tackling multi-
       | step computer programming problems.
       | 
       | Both of these agents launched mid-2025.
        
         | bpavuk wrote:
         | don't forget Aider from 2023
        
       | evil-olive wrote:
       | > But for now, I want to emphasize a broader point: I'm hoping
       | 2026 will be the year we stop caring about what people believe AI
       | might do, and instead start reacting to its real, present
       | capabilities.
       | 
       | yes, 100%
       | 
       | I think that way too often, discussions of the _current_ state of
       | tech get derailed by talking about _predictions_ of future
       | improvements.
       | 
       | hypothetical thought experiment:
       | 
       | I set a New Year's resolution for myself of drinking less
       | alcohol.
       | 
       | on New Year's Eve, I get pulled over for driving drunk.
       | 
       | the officer wants to give me a sobriety test. I respond that I
       | have _projected_ my alcohol consumption will have decreased 80%
       | YoY by Q2 2026.
       | 
       | the officer is going to smile and nod...and then _insist_ on
       | giving me the sobriety test.
       | 
       | compare this with a non-hypothetical anecdote:
       | 
       | I was talking with a friend about the environmental impacts of
       | AI, and mentioned the methane turbines in Memphis [0] that are
       | being used to power Elon Musk's MechaHitler slash CSAM generator.
       | 
       | the friend says "oh, but they're working on building nuclear
       | power plants for AI datacenters".
       | 
       | and that's technically true...but it misses the broader point.
       | 
       | if someone lives downwind of that data center, and they have a
       | kid who develops asthma, you can try to tell them "oh in 5 years
       | it'll be nuclear powered". and your prediction might be
       | correct...but their kid still has asthma.
       | 
       | 0: https://time.com/7308925/elon-musk-memphis-ai-data-center/
        
       ___________________________________________________________________
       (page generated 2026-01-05 23:00 UTC)