[HN Gopher] Andrew Ng: Building Faster with AI [video]
       ___________________________________________________________________
        
       Andrew Ng: Building Faster with AI [video]
        
       Author : sandslash
       Score  : 283 points
       Date   : 2025-07-10 14:02 UTC (2 days ago)
        
 (HTM) web link (www.youtube.com)
 (TXT) w3m dump (www.youtube.com)
        
       | bgwalter wrote:
       | [flagged]
        
         | reactordev wrote:
         | He doesn't have to at this point, he just throws money at
         | younger ones that will build it.
         | 
         | I want an Andrew Ng Agent.
        
           | Bluestein wrote:
           | ... in essence, an "A-Ngent".-
           | 
           | (I'll see myself out ...)
        
           | arkmm wrote:
           | Not affiliated, but someone's already working on that for
           | you: https://www.realavatar.ai/
        
           | reactordev wrote:
           | I'm serious, the man's a genius...
        
         | hoegarden wrote:
         | Baidu.
        
           | bgwalter wrote:
           | The video's description is about _building_ startups through
           | vibe coding, not _using_ "AI" like self-driving or chatbots
           | in startups.
           | 
           | Additionally, Baidu wasn't a startup when he joined in 2014.
        
             | hoegarden wrote:
              | Ng built Baidu's AI department and kicked off their push
              | into various sectors with actual AI system design, so no,
              | he isn't a failed startup entrepreneur like some vibe-
              | coding startup maker who already wants to stop and give
              | advice.
             | 
             | Maybe you can help me hire a vibe coder with 10 years
             | experience?
        
               | bgwalter wrote:
               | He built it _without_ LLMs in 2014 and now he is selling
               | LLMs for coding to the young. That is the entire point of
               | this subthread.
        
               | hoegarden wrote:
               | Right.. He's just a giant, not a midget with a step
               | ladder.
               | 
               | But I do question why anyone who played a significant
               | role in the foundation of the current AI generation would
               | teach an obvious new Zuckerberg generation who will
               | apparently think they are the start of everything if they
               | get a style working in the prompt.
               | 
               | If not for 3 people in 2012, I find it highly unlikely a
                | venture like OpenAI could have occurred, and without Ng in
               | particular I wouldn't be surprised if the field would
               | have been missing a few technical pieces as well as the
               | hire-able engineers.
        
         | crystal_revenge wrote:
          | A good chunk of Ng's work these days seems to be around AI Fund
          | [0], which, as he explicitly mentions in the first 5 seconds of
          | the video, involves co-founding these startups and being in the
          | weeds with the initial development.
         | 
         | Additionally, he does engage pretty closely with the teams
         | behind the content of his deeplearning.ai lectures and does
         | make sure he has a deep understanding of the products these
         | companies are highlighting.
         | 
          | He certainly is a businessman, but that doesn't exclude the
         | possibility that he remains highly knowledgeable about this
         | space.
        
           | dcreater wrote:
           | He's lost credibility in my eyes given that his courses
            | essentially have a pay-to-play model for startups like
            | LangChain.
        
             | crystal_revenge wrote:
              | Except they _aren't_ pay to play unless you consider doing
              | the work for the course the "payment". There's certainly an
              | exchange, since a lot of work is involved: DLAI provides a
              | team to help design, structure, and polish the course, and
              | the team creating the course does the majority of the work
              | creating the content. But there's no financial exchange.
             | 
                | The DLAI team is also pretty good about ensuring the
                | content covers a topic in general, not a product.
        
               | dcreater wrote:
               | The content is a repackage of previously existing,
                | publicly available notebooks, docs, and YouTube videos. I
                | wouldn't be surprised if the repackaging was done by AI.
        
               | raincole wrote:
               | Courses are not academic journals, dude. They're supposed
               | to be teaching you existing knowledge.
        
               | crystal_revenge wrote:
               | Again this is not true. I've known several people who
               | have made courses for DLAI and they all put substantial
               | time into creating the courses.
        
         | whattheheckheck wrote:
        | He literally builds companies and hires CEOs to run them. Google
        | it.
        
           | melenaboija wrote:
           | > He literally builds companies
           | 
           | Like with actual mortar, brick by brick?
        
       | mrbonner wrote:
        | You become a millionaire by selling books (courses) on how to
        | become a millionaire to others.
        
         | rubslopes wrote:
         | Relevant video
         | https://youtu.be/CWMAOzH20mY?si=Kr8vp1vo_PpRNJ8-&utm_source=...
        
           | Koshcheiushko wrote:
           | Thanks
        
       | DataDaemon wrote:
        | when there is a gold rush, just sell courses on how to mine gold
        
         | azan_ wrote:
          | He sold courses (great ones!) long before there was an AI gold
          | rush. He's one of the OG players in online education and I
         | think he deserves praise, not blame for that.
        
       | w10-1 wrote:
        | Not sure why this has drawn silence and attacks - whence the
        | animus toward Ng? His high-level assessments seem accurate, he's a
       | reasonable champion of AI, and he speaks credibly based on
       | advising many companies. What am I missing? (He does fall on the
       | side of open models (as input factors): is that the threat?)
       | 
        | He argues that the landscape is changing (at least quarterly), and
       | that services are (best) replaceable (often week-to-week) because
       | models change, but that orchestration is harder to replace, and
       | that there are relatively few orchestration platforms.
       | 
       | So: what platforms are available? Are there other HN posts that
       | assess the current state of AI orchestration?
       | 
        | (What's the AI-orchestration acronym? Not PaaS but AIOPaaS? AOP?
       | (since aspect-oriented programming is history))
        
         | handfuloflight wrote:
         | We've defined agents. Let's now define orchestration.
        
           | ramraj07 wrote:
           | Bold claim. I am not convinced anyone's done a good job
            | defining agents, and if they did, 99% of the population has a
           | different interpretation.
        
             | handfuloflight wrote:
             | Okay. We've tried to define agents. Now let's try to define
             | orchestration.
        
               | lhuser123 wrote:
               | And make it more complicated than K8s
        
               | jliptzin wrote:
               | Not possible
        
               | vajrabum wrote:
               | The platforms I've seen live on top of kubernetes so I'm
               | afraid it is possible. nvidia-docker, all the cuda
               | libraries and drivers, nccl, vllm,... Large scale
               | distributed training and inference are complicated
               | beasties and the orchestration for them is too.
        
         | stego-tech wrote:
         | > So: what platforms are available?
         | 
         | I couldn't tell you, but what I _can_ contribute to that
         | discussion is that orchestration of AI in its current form
          | would focus on one of two approaches: consistent outputs
          | despite the non-deterministic nature of LLMs, or consistent
          | inputs that lean into that non-determinism. The problem
         | with the former (output) is that you cannot guarantee the
         | output of an AI on a consistent basis, so a lot of the
         | "orchestration" of outputs is largely just brute-forcing tokens
         | until you get an answer within that acceptable range; think the
         | glut of recent "Show HN" stuff where folks built a slop-app by
         | having agents bang rocks together until the code worked.
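          | 
          | To make that output-side pattern concrete, here's a minimal
          | Python sketch (complete() stands in for whatever model call
          | you actually use; the names are made up for illustration):
          | 
          |     import json
          | 
          |     def ask_until_valid(complete, prompt, validate,
          |                         max_tries=5):
          |         # Keep sampling until the output parses and passes
          |         # validation; give up after a fixed budget.
          |         for _ in range(max_tries):
          |             raw = complete(prompt)
          |             try:
          |                 answer = json.loads(raw)
          |             except json.JSONDecodeError:
          |                 continue  # malformed output, burn a retry
          |             if validate(answer):
          |                 return answer
          |         raise RuntimeError("no acceptable output in budget")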
         | 
         | On the input side of things, orchestration is less about AI
          | itself and more about ensuring your data and tooling are
         | consistently and predictably accessible to the AI such that the
         | output is similarly predictable or consistent. If you ask an AI
         | what 2+2 is a hundred _different_ ways, you increase the
         | likelihood of hallucinations; on the other hand, ensuring the
         | agent /bot gets the same prompt with the same data formats and
         | same desired outputs every single time makes it more likely
         | that it'll stay on task and not make shit up.
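          | 
          | A sketch of that input-side idea: funnel every request
          | through one canonical template and normalize the data, so
          | equivalent inputs produce byte-identical prompts (template
          | and names are hypothetical):
          | 
          |     import json
          | 
          |     PROMPT_TEMPLATE = (
          |         "Task: {task}\n"
          |         "Data (canonical JSON): {data}\n"
          |         "Output: {output_spec}\n"
          |     )
          | 
          |     def build_prompt(task, payload):
          |         # Sorted keys + fixed separators: equivalent dicts
          |         # always render to the exact same string.
          |         data = json.dumps(payload, sort_keys=True,
          |                           separators=(",", ":"))
          |         return PROMPT_TEMPLATE.format(
          |             task=task,
          |             data=data,
          |             output_spec='one JSON object: {"answer": ...}',
          |         )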
         | 
          | My engagement with AI has been more on the input side, since
         | that's scalable with existing tooling and skillsets in the
         | marketplace instead of the output side, which requires niche
         | expertise in deep learning, machine learning, model training
         | and fine-tuning, etc. In other words, one set of skills is
         | cheaper and more plentiful while also having impacts throughout
         | the organization (because _everyone_ benefits from consistent
         | processes and clean datasets), while the other is incredibly
         | expensive and hard to come by with minimal impacts elsewhere
         | unless a profound revolution is achieved.
         | 
         | One thing to note is that Dr. Ng gives the game away at the Q&A
         | portion fairly early on: "In the future, the people who are the
         | most powerful are the people who can make computers do exactly
         | what you want it to do." In that context, the current AI slop
         | is antithetical to what he's pitching. Sure, AI can improve
         | speed on execution, prototyping, and rote processes, but the
         | real power remains in the hands of those who can build with
         | precision instead of brute-force. As we continue to hit
         | barriers in the physical capabilities of modern hardware and
         | wrestle with the effects of climate change and/or poor energy
         | policies, efficiency and precision will gradually become more
         | important than speed - at least that's my thinking.
        
           | handfuloflight wrote:
           | This is great thinking, thank you for writing this.
        
           | vlovich123 wrote:
           | > The problem with the former (output) is that you cannot
           | guarantee the output of an AI on a consistent basis
           | 
           | Do you mean you cannot guarantee the result based on a task
           | request with a random query? Or something else? I was under
           | the impression that LLMs are very deterministic if you
           | provide a fixed seed for the samplers, fixed model weights,
           | and fixed context. In cloud providers you can't guarantee
           | this because of how they implement this (batching unrelated
           | requests together and doing math). Now you can't guarantee
           | the quality of the result from that and changing the seed or
           | context can result in drastically different quality. But
           | maybe you really mean non-deterministic but I'm curious where
           | this non-determinism would come from.
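            | 
            | For what it's worth, this is easy to check locally. A
            | minimal sketch (assuming a GGUF model on disk and the
            | llama-cpp-python bindings):
            | 
            |     from llama_cpp import Llama
            | 
            |     # Same weights, same seed, same context, no batching:
            |     # greedy decoding should give identical tokens.
            |     llm = Llama(model_path="model.gguf", seed=42,
            |                 verbose=False)
            | 
            |     prompt = "Q: What is 2+2?\nA:"
            |     a = llm(prompt, max_tokens=8, temperature=0.0)
            |     b = llm(prompt, max_tokens=8, temperature=0.0)
            |     assert (a["choices"][0]["text"]
            |             == b["choices"][0]["text"])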
        
             | stego-tech wrote:
             | > I was under the impression that LLMs are very
             | deterministic if you provide a fixed seed for the samplers,
             | fixed model weights, and fixed context.
             | 
             | That's all input-side, though. On the output side, you can
             | essentially give an LLM anxiety by asking the exact same
             | question in different ways, and the machine doesn't
             | understand anymore that you're asking _the exact same
             | question_.
             | 
             | For instance, take one of these fancy "reasoning" models
             | and ask it variations on 2+2. Try two plus two, 2 plus two,
             | deux plus 2, TwO pLuS 2, etc, and observe its "reasoning"
             | outputs to see the knots it ties itself up in trying to
             | understand why you keep asking the same calculation over
             | and over again. Running an older DeepSeek model locally,
             | the "reasoning" portion continued growing in time and
             | tokens as it struggled to provide context that didn't exist
             | to a simple problem that older/pre-AI models wouldn't bat
             | an eye at and spit out "4".
             | 
             | Trying to wrangle consistent, reproducible outputs from
             | LLMs without guaranteeing consistent inputs is a fool's
             | errand.
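              | 
              | The probe I'm describing is trivial to script; a sketch
              | along these lines (again assuming llama-cpp-python and a
              | local model):
              | 
              |     from llama_cpp import Llama
              | 
              |     llm = Llama(model_path="model.gguf", verbose=False)
              | 
              |     def ask(q):
              |         out = llm(f"Q: What is {q}?\nA:",
              |                   max_tokens=16, temperature=0.0)
              |         return out["choices"][0]["text"].strip()
              | 
              |     # Same question, varied surface forms. A robust
              |     # model maps every variant to "4"; the spread of
              |     # answers is the thing being measured.
              |     variants = ["2+2", "two plus two", "2 plus two",
              |                 "deux plus 2", "TwO pLuS 2"]
              |     print({v: ask(v) for v in variants})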
        
               | vlovich123 wrote:
               | Ok yes. I call that robustness of the model as opposed to
               | determinism which to me implies different properties. And
               | yes, I too have been frustrated by the lack of robustness
               | of models to minor variations in input or even using a
               | different seed for the same input.
        
             | contrast wrote:
              | Pointing out that LLMs are deterministic as long as you
              | lock down everything is like saying an extra bouncy ball
              | doesn't bounce if you leave it on a flat surface, reduce the
             | temperature to absolute zero, and make sure the surface and
             | the ball are at rest before starting the experiment.
             | 
             | It's true but irrelevant.
             | 
             | One of the GP's main points was that even the simplest
             | questions can lead to hundreds of different contexts; they
              | probably already know that you could get consistent
              | outcomes if you could instead fix the context.
        
           | void-star wrote:
           | Really valid points. I agree with the bits about "expertise
           | in getting the computer to do what you want" being the way of
           | the future, but he also raises really valid points about
           | people having strong domain knowledge (a la his colleague
           | with extensive art history knowledge being better at
            | Midjourney than him) after saying it's okay to tell people to
           | just let the LLM write code for you and learn to code that
           | way. I am having a hard time with the contradictions, maybe
           | it's me. Not meaning to rag on Dr. Ng, just further the
           | conversation. (Which is super interesting to me.)
           | 
           | EDIT: rereading and realizing I think what resonates most is
           | we are in agreement about the antithetical aspects of the
           | talk. I think this is the crux of the issue.
        
         | lubujackson wrote:
         | I'm guessing because this is basically an AI for Dummies
         | overview, while half of HN is deep in the weeds with AI
         | already. Nothing wrong with the talk! Except his focus on "do
         | everything" agents already feels a bit stale as the move seems
         | to be going in the direction of limited agents with a much
         | stronger focus on orchestration of tools and context.
        
           | hakanderyal wrote:
           | From the recent threads, it feels like the other half is
           | totally, willfully ignorant. Hence the responses.
        
             | rhizome31 wrote:
             | As someone who is part of that other half, I agree.
        
           | davorak wrote:
           | > I'm guessing because this is basically an AI for Dummies
           | 
            | I second this, for the silence at least. I listened to the
           | talk because it was Andrew Ng and it is good or at least fun
           | to listen to talks by famous people, but I did not walk away
           | with any new key insights, which is fine, most talks are not
           | that.
        
           | fullstackchris wrote:
           | > deep in the weeds with AI already
           | 
            | I doubt even 10% have written a custom MCP tool... and some
            | probably don't even know what that means.
        
         | jart wrote:
         | I like Andrew Ng. He's like the Mister Rogers of AI. I always
         | listen when he has something to say.
        
           | koakuma-chan wrote:
           | Is he affiliated with nghttp?
        
             | dmoy wrote:
             | No?
             | 
             | ng*, ng-*, or *-ng is typically "Next Generation" in
              | software nomenclature. Or Star Trek (TNG). Alternatively,
              | "ng-" is also from AngularJS.
             | 
              | Ng in Andrew Ng is just his surname, the Cantonese form
              | of the same name as Wu in Mandarin.
        
               | janderson215 wrote:
               | Wu from Wu-Tang?
        
               | yorwba wrote:
               | No, Wu-Tang ultimately derives from the Wudang Mountains,
                | with the corresponding Cantonese being Moudong:
                | https://en.wiktionary.org/wiki/%E6%AD%A6%E7%95%B6%E5%B1%B1
        
               | 57473m3n7Fur7h3 wrote:
               | And between that and the rap group there's this important
               | movie:
               | 
               | Shaolin and Wu Tang (1983)
               | 
               | > The film is about the rivalry between the Shaolin (East
               | Asian Mahayana) and Wu-Tang (Taoist Religion) martial
               | arts schools. [...]
               | 
               | > East Coast hip-hop group Wu-Tang Clan has cited the
               | film as an early inspiration. The film is one of Wu-Tang
               | Clan founder RZA's favorite films of all time. Founders
               | RZA and Ol' Dirty Bastard first saw the film in 1992 in a
               | grindhouse cinema on Manhattan's 42nd Street and would
               | found the group shortly after with GZA. The group would
               | release its debut album Enter the Wu-Tang (36 Chambers),
               | featuring samples from the film's English dub; the
               | album's namesake is an amalgamation of Enter the Dragon
               | (1973), Shaolin and Wu Tang, and The 36th Chamber of
               | Shaolin (1978).
               | 
               | https://en.wikipedia.org/wiki/Shaolin_and_Wu_Tang
        
               | dmoy wrote:
               | Yea haha the chinese-to-english gets confusing, because
                | it's not a 1:1, it's an N:1 thing, given the number of
                | different Chinese languages, different tones, and semi-
               | malicious US immigration agents who botched the shit out
               | of people's names in the late 19th and early 20th
               | century.
               | 
               | Wu and Ng in Mandarin and Cantonese may be the same
               | character. But Wu the common surname and Wu for some
               | other thing (e.g. that mountain) may be different
               | characters entirely.
               | 
               | It gets even more confusing when you throw a third
               | Chinese language in, say Taishanese:
               | 
               | Wu = Ng (typically) for Mandarin and Cantonese et al. But
               | if it's someone who went to America earlier, suddenly
               | it's Woo. But even though they're both yue Chinese
               | languages, Woo != Woo in Cantonese and Taishanese. For
               | that name, it's Hu (Mandarin) = Wu / Wuh (Cantonese) =
               | Woo (Taishanese, in America). Sometimes. Lol. Sometimes
               | not.
               | 
               | Similarly, Mei = Mai = Moy
        
           | mnky9800n wrote:
            | And he's been doing it forever, all from the original idea
            | that he could offer a Stanford education on AI for free on
            | the Internet; thus he created Coursera. The dude is cool.
        
         | tomrod wrote:
         | No need to add AI to the name, especially if it works. PaaS and
         | IaaS are sufficient.
        
         | lloeki wrote:
         | > AOP? (since aspect-oriented programming is history)
         | 
          | AOP is very much alive; people who do AOP have just forgotten
          | what the name is, and many have simply reinvented it poorly.
        
           | nivertech wrote:
           | AOP always felt like a hack. I used it with C++ early on, and
            | it was a preprocessor inserting ("weaving") aspects at
            | function entries/exits. It was mostly useful for logging,
            | but that can be somewhat emulated using C++
            | constructors/destructors.
           | 
           | Maybe it can be also useful for DbC (Design-by-Contract) when
           | sets of functions/methods have common pre/post-conditions
           | and/or invariants.
           | 
            | https://en.wikipedia.org/wiki/Aspect-oriented_programming#Cr...
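            | 
            | (In Python the same entry/exit weaving survives as
            | decorators - a toy version of the logging aspect, not the
            | preprocessor approach:)
            | 
            |     import functools, logging
            | 
            |     logging.basicConfig(level=logging.INFO)
            | 
            |     def logged(fn):
            |         # The "aspect": behavior wrapped around every
            |         # call, business logic left untouched.
            |         @functools.wraps(fn)
            |         def wrapper(*args, **kwargs):
            |             logging.info("enter %s %r", fn.__name__, args)
            |             try:
            |                 return fn(*args, **kwargs)
            |             finally:
            |                 logging.info("exit %s", fn.__name__)
            |         return wrapper
            | 
            |     @logged
            |     def transfer(src, dst, amount):
            |         return f"moved {amount} from {src} to {dst}"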
        
           | alex_smart wrote:
           | Also very much alive and called that in the Java/Spring
           | ecosystem
        
       | pchristensen wrote:
        | I have had reservations about Ng from a lot of his past hype, but
       | I thought this talk was extremely practical and tactical. I
       | recommend watching it before passing judgement.
        
       | croes wrote:
        | I haven't watched the video yet, but the title does sound like
       | quantity over quality.
       | 
       | Why faster and not better with AI?
        
         | pinkmuffinere wrote:
         | I think this is an interesting question, and I'd like to
         | genuinely attempt an answer.
         | 
         | I essentially think this is because people prefer to optimize
         | what they can measure.
         | 
         | It is hard to measure the quality of work. People have
         | subjective opinions, the size of opportunities can be
         | different, etc, making quality hard to pin down. It is much
         | easier to measure the time required for each iteration on a
         | concept. Additionally, I think it is generally believed that a
          | project with more iterations tends to have higher quality than
          | a project with fewer, even putting aside the concern about
          | measuring quality itself. Therefore, we put aside the
          | discussion of quality (which we'd really like to improve) and
          | instead optimize the thing we can actually measure (time to do
          | something), with the strong implication that this _also_ will
          | tend to increase quality.
        
           | croes wrote:
           | I think speed isn't our problem.
           | 
            | Most of the time the problem is quality, but everyone only
            | seems eager to ship as fast as possible.
           | 
           | Move fast and break things already happened and now we are
           | adding more speed.
           | 
            | "Your scientists were so preoccupied with whether they
           | could, they didn't stop to think if they should."
           | 
           | Or for the more sophisticated
           | 
           | https://en.wikipedia.org/wiki/The_Physicists
           | 
            | Energy consumption and data protection were a thing, and
            | then came AI, and all of a sudden they don't matter anymore.
           | 
            | Among all the good things people create with AI, I see a lot
            | more useless or even harmful things. Scams and fake news get
            | better and harder to detect, to the point where reality
            | doesn't matter anymore.
        
         | markerz wrote:
          | I think quality takes time and refinement, which is not
          | something that LLMs have solved very well today. They are very
          | okay at it, except for very specific targeted refinements
          | (Grammarly, SQL editors).
         | 
         | However, they are excellent at building from 0->1, and the
         | video is suggesting that this is perfect for startups. In the
         | context of startups, faster is better.
        
           | croes wrote:
           | Depends on the startup. For medical or financial things
           | faster isn't better.
           | 
           | DOGE acts like a startup and we all fear the damage.
           | 
            | I would prefer better startups over faster ones at any time.
           | 
           | Now I fear AI will just make the haystack bigger and the
           | needles harder to find.
           | 
            | Same with artists, writers, musicians. They drown in the
            | flood of mass-produced AI content.
        
       | androng wrote:
       | https://toolong.link/v?w=RNJCfif1dPY&l=en
        
       | Keyframe wrote:
        | strong MLM energy in that talk.
        
       | imranq wrote:
        | My two takeaways are: 1) having a precise vision of what you
        | want to achieve, and 2) being able to control / steer AI towards
        | that vision.
        | 
        | Teams that can do both of these things, especially #1, will move
        | much faster. Even if they are wrong, it's better than vague ideas
        | that get applause but not customers.
        
         | void-star wrote:
          | Yes, this! The observation that being specific rather than
          | general about the problems you want to solve makes for a
          | better startup plan is true for all startups ever, not just
          | ones that use LLMs. Anecdotal/personal startup experiences
          | support this strongly, and I read enough on here to know that
          | I am not alone...
        
           | techpineapple wrote:
           | What's the balance between being specific in a way that's
           | positive and allows you to solve good problems, and not
            | getting pigeonholed and being unable to pivot? I wonder if
            | companies who pivot are the norm or if you just hear of the
           | most popular cases.
        
       | skipants wrote:
       | I'm 20 minutes into the video and it does seem mostly basic and
       | agreeable.
       | 
        | Two arguments from Ng really stuck out and are tripping my
        | skepticism alarm:
       | 
        | 1) He mentions how much faster prototyping has become now that
        | generating a simple app is easier with AI. This, to me, has always
       | been quick and never the bottleneck for any company I've been at,
       | including startups. Validating an idea was simple enough via
       | wireframing. I can maybe see it for selling an idea where you
        | need some amount of fidelity to impress potential investors...
       | but I would hope places like YC can see the tech behind the idea
       | without seeing the tech itself. Or at least can ignore low
       | fidelity if a prototype shows the meat of the product.
       | 
       | 2) Ng talks about how everyone in his company codes, from the
       | front desk to the executives. The "everyone should code" idea has
       | been done and shown to fail for the past 15 years. In fact I've
       | seen it be more damaging than helpful because it gave people
       | false confidence that they could tell engineers how to do their
        | job rather than fostering a more empathetic understanding.
        
         | apwell23 wrote:
         | even prototyping hasn't become "fast" because you cannot purely
         | vibecode even a prototype.
        
         | marcosdumay wrote:
         | On point 1, it's worse than that. Adding detail and veracity to
         | a prototype is well known to bring negative value.
         | 
          | Prototypes must be exactly as sketchy as the ideas they
         | represent, otherwise they mislead people into thinking the
         | software is built and your ideas can't be changed.
        
           | macNchz wrote:
           | I've always said this as well, having done lots and lots of
           | early stage building and prototyping, and suffering plenty of
           | proto-duction foibles, however my view has shifted on this a
           | lot in the last year or so.
           | 
            | With current models I'm able to throw together fully working
            | web app prototypes so quickly, and iterate on often-sweeping
            | UI and architectural changes so readily, that I'm finding it
            | has changed my whole workflow. The idea of trying to keep things
           | low-fidelity at the start is predicated on the understanding
           | that changes later in the process are much more difficult or
           | expensive, which I think is increasingly no longer the case
           | in many circumstances. Having a completely working prototype
           | and then totally changing how it works in just a few
           | sentences is really quite something.
           | 
           | The key to sustainability in this pattern, in my opinion, is
           | not letting the AI dictate project structure or get too far
           | ahead of your own understanding/oversight of the general
           | architecture. That's a balancing act to be sure, since purely
           | vibe-coding is awfully tempting, but it's still far too easy
           | to wind up with a big ball of wax that neither human nor AI
           | can further improve.
        
         | mteoharov wrote:
         | At my company everybody codes, including PMs and business
          | people. It can definitely be damaging in the long run if done
         | without any supervision from an actual programmer. This is why
          | we assign an engineer to review every PR of a vibe-coded
          | project; they don't really need all of the context to detect
          | BS approaches that will surely fail.
         | 
          | About prototyping - it's much faster and I don't know how anyone
          | can argue this. PMs can get a full-blown prototype for an MVP
         | working in a day with AI assistance. Sure - they will be thrown
         | in the trash after the demo, but they carry out their purpose
         | of proving a concept. The code is janky but it works for its
         | purpose.
        
           | willahmad wrote:
            | > This is why we assign an engineer to review every PR of a
            | vibe-coded project; they don't really need all of the
            | context to detect BS approaches that will surely fail.
           | 
            | I see this trend in many companies as well. Just curious: how
           | do you make sure engineering time is not wasted reviewing so
            | many PRs? Because some of them will be good and some
            | definitely bad, and you only need a couple of your bets to
            | take off.
        
           | sensanaty wrote:
           | Good lord I think I'd rather eat a shotgun than be forced to
           | review a billion garbage PRs made by PMs and other non-
           | technical colleagues. It's bad enough reviewing PRs
           | backenders write for FE features badly with AI (and vice
           | versa), I cannot even imagine the pits of hell this crap is
           | like.
           | 
           | What happens when inevitably the PR/code is horrid? Do they
           | just keep prompting and pushing out slop that some poor
           | overworked dev is now forced to sit through lest he get PIP'd
           | for not being brainwashed by LLMs yet?
        
         | torginus wrote:
         | The "everyone should code" idea has been done and shown to fail
         | for the past 15 years - I pretty much completely agree, and
          | this idea reflects the outsized importance placed on
          | programming as some kind of inherently superior activity, and
          | on bringing the ability to program to the masses as some kind
          | of ultimate good.
         | 
          | If you've worked long enough and have interacted with people
          | with varied skillsets, you know people who don't code aren't
          | only there for show; in fact, depending on the type of company
          | you work at, their jobs might be genuinely more important for
         | company's success than yours.
        
       | nikolayasdf123 wrote:
       | not a single word about overwhelming replacement of humans with
       | AI. nothing about countless jobs lost. nothing about ever
       | increasing competition and rat-race. (speaking of software, but
        | applies to all industries). his rose-tinted view is somewhere
        | between optimism-in-denial and straight-up lunacy. if this is the
       | leader(s) we have been following, this should be a wake up call.
        
         | lbrito wrote:
         | How dare you insinuate that there might be negatives in a new
         | technology. Outrageous. AI good.
        
       | mehulashah wrote:
        | This talk is deceptively simple. The most sage advice, which
        | founders routinely forget: what concrete idea are you going to
        | implement, and why do you think it will work? There has to be a
        | way to invalidate your idea, and as a corollary you must have
        | the focus to collect the data and properly invalidate it.
        
         | nextworddev wrote:
         | Hey Mehul, crossed paths with you at AWS. Good to see you are
         | doing your own thing now. We could connect sometime
        
       | cachecrab wrote:
       | 1 product manager to 0.5 engineers for a project? That seems...
       | off.
        
       | sensanaty wrote:
       | > 1 product manager to 0.5 engineers
       | 
       | I would love to have access to whatever this guy is smoking,
       | cause that is some grade-A mind rotted insanity right there. I
       | can count on half of 1 hand the number of good PMs I've had
       | trough my career who weren't a net negative on the
       | projects/companies, and even they most definitely cannot build
       | jackshit by throwing a bunch of LLM-hallucinated crap at the wall
       | and seeing what sticks.
       | 
       | But sure, the devs are the ones that are going to be replaced by
       | the clueless middle managers who only exist to waste everyone's
       | time.
        
         | macawfish wrote:
         | Or is it the other way around? Project managers who can't
         | actually competently execute won't be able to hang?
         | 
         | In the end, what if technically sharp designers and well
         | rounded developers actually end up pushing out incompetent
         | managers?
         | 
         | Could be wishful thinking but you never know.
        
           | macawfish wrote:
            | Case in point:
            | https://old.reddit.com/r/ProductManagement/comments/1lw9r9h/...
           | 
           | (the comments are especially revealing)
        
         | uses wrote:
         | he's saying that the productivity of devs is increasing so
         | much, especially during the prototyping phase, that gathering
         | feedback is becoming the bottleneck, hence there is more PM
         | labor needed. he didn't say anything about reducing the
         | quantity of dev labor needed.
        
       ___________________________________________________________________
       (page generated 2025-07-12 23:01 UTC)