[HN Gopher] Firing programmers for AI is a mistake
___________________________________________________________________
Firing programmers for AI is a mistake
Author : frag
Score : 586 points
Date : 2025-02-11 09:42 UTC (13 hours ago)
(HTM) web link (defragzone.substack.com)
(TXT) w3m dump (defragzone.substack.com)
| cranberryturkey wrote:
| you still need a programmer
| md5crypto wrote:
| Do we?
| bryukh wrote:
| "Let AI replace programmers" is the new "Let's outsource
| everything to <some country>." Short-term cost savings, long-term
| disaster.
| outime wrote:
| Has there been any company so far that has fired 100% of their
| programmers to replace them with AI?
| UndefinedRef wrote:
| Didn't fire anyone per se, but I am a solo developer and I can
| do the work of 2 people now
|
| Edit: I am a solo developer and I have to work half the time
| only now.
| hassleblad23 wrote:
| Could have been "I am a solo developer and I have to work
| half the time only now."
| finnjohnsen2 wrote:
| Why does this never happen? :(
| kachhalimbu wrote:
| Because work expands to fill the time. You are more
| efficient at work and get more done? Awesome, now you
| have more responsibility.
| UndefinedRef wrote:
| I like that better
| monsieurbanana wrote:
| There are no serious sources about people wanting to fire 100% of
| [insert title here] for LLMs. It's more about reducing head-
| count by leveraging LLMs as a productivity multiplier.
|
| I haven't heard of companies successfully doing that at scale
| though.
| finnjohnsen2 wrote:
| The trick is to call the people who are using the AI to
| generate code something other than programmers.
| alkonaut wrote:
| Has there been any company that has laid off even a nontrivial
| amount of programmers and replaced them with AI? Here I mean,
| where developers at said company actually say the process works
| and is established, and the staff cuts weren't happening
| anyway.
|
| I know there are CEOs that make bold claims about this (E.g.
| Klarna) but I don't really assign any value to that until I
| hear from people on the floor.
| kkapelon wrote:
| BT news suggests they announced job cuts back in 2023 and are
| actually making them now
|
| https://www.socialistparty.org.uk/articles/133443/03-12-2024...
|
| https://www.forbes.com/sites/siladityaray/2023/05/18/telecom...
| kypro wrote:
| Small companies, yes, absolutely.
|
| If you have a small non-tech company with a website you pay a
| freelance programmer to maintain you should seriously consider
| replacing your programmer with AI.
|
| I work for a company which among other things provides
| technical support for a number of small tech-oriented
| businesses, and we have a lot of problems right now with clients
| trying to do things on their own with the help of AI.
|
| In our case the complexity of some of these projects and the
| limited ability of AI means that they're typically creating
| more bugs and tech debt for us to fix and are not really saving
| themselves any time - and this is certainly going to be true at
| the moment for any large project. However, if you're paying
| programmers just to manage the content of a few small websites
| it probably begins to make sense to use AI instead.
| penetrarthur wrote:
| This still implies that the person who is currently paying
| freelance programmers is 1) good with LLMs 2) knows some html
| and js 3) can deploy the updated website.
| kypro wrote:
| You're probably right that these people still need some
| baseline technical skills currently, but I'm really not
| assuming anything here - this is something we've seen
| multiple of our clients do in recent months.
|
| It's funny you say they need to be able to deploy the
| update to be honest because we had a client just last week
| email a collection of code snippets to us which they created
| with the help of AI.
|
| This is the problem we have though because we're not just
| building simple websites which we can hand clients FTP
| creds for. The best we can do is advise them to learn Git
| and raise a PR which we can review and deploy ourselves.
| monsieurbanana wrote:
| Sounds like programming with extra steps. And I don't
| like it when the extra steps involve mailing snippets
| of code
| InDubioProRubio wrote:
| Everything that is of low value. And it's okay. If it is not
| useful to humanity, it should decay over time and fade away. Low-
| value propositions should be loaded with parasitic computation
| that burdens them with costs until they collapse and allow new
| growth to replace the old system.
| Aeolun wrote:
| The same reason that outsourcing all your telecom infra to China
| is a bad idea.
| frag wrote:
| true that
| ponector wrote:
| >>companies aren't investing in junior developers
|
| That was the case before AI as well.
|
| Overall it reads the same as "Twitter will be destroyed by mass
| layoffs". But it is still online
| vanderZwan wrote:
| "Destroyed" is not the same as "annihilated from exhistence".
| Twitter is a shell of its former self.
| tobyhinloopen wrote:
| Twitter is, in fact, not online. It redirects to something
| called X, which is not Twitter.
| qwertox wrote:
| Yet there is this:
| <a href="https://twitter.com/tos">Terms of Service</a>
| <a href="https://twitter.com/privacy">Privacy Policy</a>
| <a href="https://support.twitter.com/articles/20170514">Cookie
| Policy</a>
| <a href="https://legal.twitter.com/imprint.html">Imprint</a>
| lxgr wrote:
| One would hope that hackers understand the distinction
| between name and referent.
| adamors wrote:
| Twitter is essentially losing money even with a bare bones
| staff
|
| https://www.theverge.com/2025/1/24/24351317/elon-musk-x-twit...
|
| It's being kept online because it's a good propaganda tool, not
| due to how it performs on the free market.
| Zealotux wrote:
| It's not related to engineering issues but rather the
| ideological shift and the fact that it became a blatant
| propaganda machine for its new overlord.
| Macha wrote:
| Ehh, as a propaganda tool it would be more useful to not
| have a hard login wall, which was allegedly implemented due
| to engineering challenges continuing to operate Twitter at
| its former scale. So the engineering issues are even
| limiting its new goals.
| bloomingkales wrote:
| It's arguably depreciating in value faster than a new car.
| One of Elmo's worst judgement calls (and that's saying a
| lot). Altman jabbed at Elmo and offered $9 billion for X, 1/4th
| the price Elmo paid.
|
| It's kind of hilarious watching the piranhas go at each
| other:
|
| https://finance.yahoo.com/news/elon-musk-reportedly-offers-9...
| gilbetron wrote:
| It arguably got him to be effectively an unelected
| President (at least for now); the investment has largely
| paid off, scarily so.
|
| For twitter as a business? Awful.
| hbn wrote:
| "Is a good propaganda tool" doesn't keep a website up,
| engineers do. It's losing money because a bunch of major
| advertisers pulled out, not because there's not enough
| engineers to keep it online.
|
| I use it daily and can't remember the last outage.
| frag wrote:
| and we also see with what consequences
| kklisura wrote:
| "it is still online" is pretty high bar. The systems is riddled
| with bugs that are not being addressed for months now and the
| amount of spam and bots is even larger than before.
| Draiken wrote:
| Being privately owned by a man with near infinite resources
| means it can stay online however long its owner wants, whether
| it's successful or not.
| pyrale wrote:
| We have fired all our programmers.
|
| However, the AI is hard to work with, it expects specific wording
| in order to program our code as expected.
|
| We have hired people with expertise in the specific language
| needed to transmit our specifications to the AI with more
| precision.
| thelittleone wrote:
| I sure empathize.... our AI is fussy and rigid... pedantic
| even.
| worthless-trash wrote:
| Error on line 5: specification can be interpreted too many
| ways, can't define type from 'thing':
|     Remember to underline the thing that shows the error
|                                ~~~~~
|                                | This 'thing' matches too many
|                                  objects in the knowledge scope.
| re-thc wrote:
| > it expects specific wording in order to program our code as
| expected
|
| The AI complained that the message did not originate from a
| programmer and decided not to respond.
| jakeogh wrote:
| Local minimum.
| ryanjshaw wrote:
| What job title are you thinking of using?
| pyrale wrote:
| Speaker With Expertise
| amarcheschi wrote:
| Soft Waste Enjoyer
| silveraxe93 wrote:
| oftwaresay engineeryay
| kayge wrote:
| Full Prompt Developer
| beepboopboop wrote:
| AI Whisperer
| yoyohello13 wrote:
| Tech Priest
| __MatrixMan__ wrote:
| Technomancer. AI is far more like the undead than like a
| deity, at least for now.
| aleph_minus_one wrote:
| > We have hired people with expertise in the specific language
| needed to transmit our specifications to the AI with more
| precision.
|
| These people are however not experts in pretending to be
| obedient lackeys.
| SketchySeaBeast wrote:
| Hey! I haven't spent a decade of smiling through the pain to
| be considered an amateur lackey.
| HqatsR wrote:
| Yes, the best way is to type the real program completely into
| the AI, so that ClosedAI gets new material to train on, the AI
| can make some dumb comments but the code works.
|
| And the manager is happy that filthy programmers are "using"
| AI.
| kamaal wrote:
| >>However, the AI is hard to work with, it expects specific
| wording in order to program our code as expected.
|
| Speaking English to make something is one thing, but speaking
| English to modify something complicated is absolutely something
| else. And I'm pretty sure it involves more or less the same
| effort as writing code itself. Of course, regression testing for
| something like this is not for the faint of heart.
| GuB-42 wrote:
| > We have hired people with expertise in the specific language
| needed to transmit our specifications to the AI with more
| precision.
|
| Also known as programmers.
|
| The "AI" part is irrelevant. Someone with expertise in
| transmitting specifications to a computer is a programmer, no
| matter the language.
|
| EDIT: Yep, I realized that it could be the joke, but reading
| the other comments, it wasn't obvious.
| philipov wrote:
| whoosh! (that's the joke)
| phren0logy wrote:
| I think people aren't getting your joke.
| eimrine wrote:
| Now we are!
| smitelli wrote:
| The AI that replaced the people, however, is in stitches.
| dboreham wrote:
| That was pretty funny. Bonus points if it was posted by an AI
| bot.
| pyrale wrote:
| Damn, if we're also made redundant for posting snickering
| jokes on HN I'm definitely going to need a new occupation.
| markus_zhang wrote:
| Actually I think that's the near future, or close to it.
|
| 1. Humans also need specific wording in order to program code
| that stakeholders expect. A lot of people are laughing at AI
| because they think gathering requirements is a human privilege.
|
| 2. On the contrary, I don't think people need to hire AI
| interfacers. Instead, business stakeholders are way more
| interested in interfacing with AI themselves, simply because
| they just want to get things done instead of filing a ticket for
| us. Some of them are going to be good interfacers with proper
| integration -- and yes, we programmers are helping them to do so.
|
| Side note: I don't think you are going to hear someone shouting
| that they are going to replace humans with AI. It starts with
| this: people integrate AI into their workflow, lay off 10%, and
| see if AI helps fill the gap so they can freeze hiring. Then
| they lay off 10% more.
|
| And yes, we programmers are helping the business do that, with a
| proud, smiling face.
|
| Good luck.
| ImaCake wrote:
| Your argument depends on LLMs being able to handle the
| complexity that is currently the MBA -> dev interface. I
| suspect it won't really solve it, but its ability to
| facilitate and simplify that interface will be invaluable.
|
| I'm not convinced the people writing specs are capable of
| writing them well enough that an LLM can replace the human
| dev.
| Frieren wrote:
| Many people are missing the point. The strategy for AI usage is
| not a long-term strategy to make the world more productive. If
| companies can save a buck this year, companies will do it.
| Period.
|
| The average manager has short-term goals that they need to
| fulfill, and if they can use AI to fulfill them they will,
| future be damned.
|
| Reining in long-term consequences has always been the job of
| government and regulation. So this kind of article is useful,
| but it should be directed at elected officials, not the industry
| itself.
|
| Finally, what programmers need is what all workers need.
| Unionization, collective bargaining, social safety nets, etc. It
| will protect programmers from swings in the job market as it will
| do it for everybody else that needs a job to make ends meet.
| javier2 wrote:
| This, we really should have unionized several years ago
| flanked-evergl wrote:
| Who is we?
| ratorx wrote:
| Software ENGINEERS could benefit from unions once they start
| getting replaced by AI, but that's a fairly indirect way
| to solve the problem. Governments will eventually need to deal
| with mass unemployment, but that's a societal problem bigger
| than any individual profession.
|
| What Software ENGINEERING needs is standards and regulations,
| like any other engineering discipline. If you accept that
| software has become a significant enough component in society
| that the consequences of it breaking etc are bad, then serious
| software needs standards to adhere to, and people who are
| certified for them.
|
| Once you have standards, the bar to actually replace certified
| engineers is higher and has legal risk. That way, how good AI
| needs to be has a higher (and safer) bar, which can properly
| optimise for the long term consequences.
|
| If the software is not critical or important enough to be
| standardised, then let AI take over the creation. At that
| point, it's not really any different to any other learning or
| creative endeavour.
| gorbachev wrote:
| There's probably a 1-3 year half-life on business-critical
| applications created by generative AI. Longer for stuff nobody
| cares about.
|
| Take a 1-3 year sabbatical, then charge a 1000% markup when the
| AI slop owners come calling, begging you to fix the stuff nobody
| understands.
| EVa5I7bHFq9mnYK wrote:
| I remember very uplifting talk here a few years ago about firing
| truck and taxi drivers for AI. Turns out, programmers are easier
| to replace :)
| MonkeyClub wrote:
| Taxi driving is an antifragile profession apparently, to a
| degree that computer programming could only aspire to.
| tobyhinloopen wrote:
| I think AI will thrive in low-code systems, not by writing
| Javascript.
| blarg1 wrote:
| It would be cool to see non-programmers using it to automate
| their tasks, maybe with a Scratch-like language.
| bigfatkitten wrote:
| For each programmer who actually spends their time on complex
| design work or fixing difficult bugs, there are many more doing
| what amounts to clerical work. Adding a new form here, fiddling
| with a layout there.
|
| It is the latter class who are in real danger.
| soco wrote:
| But the question is, are we there yet? I have yet to hear of an
| AI bot that can eat up a requirement and add said new form in
| the right place, or fiddle with a layout. Do you know any? So
| all those big promises you read _right now_ are outright lies.
| When will we reach that point? I don't do gambling. But we are
| not there, regardless of what the salespeople or fancy
| journalists might be claiming all day long.
| fakedang wrote:
| Cursor already does the latter task pretty well, as I'm sure
| other AI agents already do. AI struggles only when it's
| something complex, like dealing with a geometric object, or
| plugging together infra, or programme logic.
|
| Last year, I built a reasonably complicated e-commerce
| project wholly with AI, using the zod library and some pretty
| convoluted e-commerce logic. While it was a struggle, I was
| able to build it out in a couple of weeks. And I had zero
| prior experience even building forms in react, forget using
| zod.
|
| Now shipping it to production? That's something AI will
| struggle at, but humans also struggle at that :(
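|
| For readers who don't know zod, a minimal sketch of the kind
| of validation schema involved (the field names here are made
| up, not from my actual project):
|
|     import { z } from "zod";
|
|     // Validate untrusted form input before it reaches any
|     // business logic.
|     const orderSchema = z.object({
|       email: z.string().email(),
|       quantity: z.number().int().positive(),
|       couponCode: z.string().optional(),
|     });
|
|     const result = orderSchema.safeParse({ email: "a@b.co",
|                                            quantity: 2 });
|     if (!result.success) console.error(result.error.issues);
|
| safeParse returns a tagged result instead of throwing, which
| makes it easy to surface validation errors back to the form.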
| franktankbank wrote:
| > Now shipping it to production? That's something AI will
| struggle at, but humans also struggle at that :(
|
| Why? Just because that's where the rubber hits the road?
| It's a different skillset, but AI can do systems design too
| and probably direct a knowledgeable but unpracticed
| implementer.
| greentxt wrote:
| Pizza maker will not be the first job automated away. Nor
| will janitor. Form fiddlers are cheap and can be blamed. AI
| fiddlers can be blamed too but are not cheap, yet.
| awkward wrote:
| The new form and layout are what the business wants and can
| easily articulate. What they need is people who understand
| whether the new form needs to both be stored in the local
| postgres system or if it should trigger a Kafka event to notify
| other parts of the business.
|
| The AI only world is still one where the form and layout get
| done, but what happens to that data afterward?
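|
| Concretely, the difference looks something like this (a minimal
| sketch; library calls follow pg and kafkajs, and every name
| below is hypothetical):
|
|     import { Pool } from "pg";
|     import { Kafka } from "kafkajs";
|
|     const db = new Pool(); // connection config from PG* env vars
|     const producer = new Kafka({ brokers: ["kafka:9092"] })
|       .producer();
|
|     async function saveForm(form: { id: string; data: object },
|                             notifyBusiness: boolean) {
|       // Always persist the submission locally.
|       await db.query(
|         "INSERT INTO forms (id, data) VALUES ($1, $2)",
|         [form.id, JSON.stringify(form.data)]);
|       // Only some forms are events the rest of the business
|       // cares about.
|       if (notifyBusiness) {
|         await producer.connect();
|         await producer.send({
|           topic: "forms.created",
|           messages: [{ key: form.id,
|                        value: JSON.stringify(form.data) }],
|         });
|       }
|     }
|
| Whether notifyBusiness should be true is exactly the judgement
| call the form itself can't tell you.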
| saalweachter wrote:
| Layout fiddlers make changes people can see and understand.
|
| If your job is massaging data for nebulous purposes using
| nebulous means and getting nebulous results, that you need to
| basically be another person doing the exact same thing to
| understand the value of, there's going to be a whole lot of
| management saying "Do we really need all those guys over there
| doing that? Can't we just have like one guy and a bunch of new
| AI magic?"
| elric wrote:
| That kind of boring busywork can be eliminated by using better
| abstractions. At the same time, it's a useful training ground
| for junior developers.
| Retr0id wrote:
| I don't have anything against this style of writing in
| particular, but it's a shame it makes me assume it was written by
| an LLM
| natch wrote:
| It was.
| Retr0id wrote:
| My first impressions prevented me from reading more than the
| first sentence, so I didn't want to state it so confidently
| ;)
| entropyneur wrote:
| Is that actually a thing? Anybody here being replaced with AI? I
| haven't observed any such trends around me and it's especially
| hard to imagine that happening in "tech" (the software industry).
| At this stage of AI development of course - if things continue at
| this pace anything is possible.
| Zealotux wrote:
| I would say "soft replacement" is a thing. People may not be
| getting fired directly, but companies are hiring fewer
| developers, and freelancers are most likely getting fewer
| opportunities than before.
| EZ-E wrote:
| Agreed, hiring has slowed down but this seems more caused by
| the end of the zero-interest-rate era. At most I see low-level
| copywriting and low-level translation jobs in danger, where the
| work is a simpler input/output flow
| awkward wrote:
| Of course it's just macroeconomics, but AI is serving as an
| "optimistic" reason for layoffs and cost cutting. It's not
| the same old spreadsheets putting out different results, it's
| a new era.
| kkapelon wrote:
| "Telecom Giant BT Will Cut 55,000 Jobs By 2030--And Replace
| Thousands Of Roles With AI"
|
| https://www.forbes.com/sites/siladityaray/2023/05/18/telecom...
| Macha wrote:
| From the article:
|
| > The massive cut represents more than 40% of the company's
| 130,000-strong workforce--including 30,000 contractors--and
| it will impact both BT employees and third-party contractors,
| according to the Financial Times.
|
| > BT CEO Philip Jansen told reporters that the cuts are part
| of the company's efforts to become "leaner," but added that
| he expects around 10,000 of those jobs to be replaced by AI.
|
| > Citing an unnamed source close to the company, the FT
| report added that the cuts will also affect 15,000 fiber
| engineers and 10,000 maintenance workers
|
| Can you replace customer service agents with AI? The
| experience will be worse, but as with every innovation in
| customer service in recent decades (phone trees, outsourced
| email support, "please go browse our knowledge base"), you
| don't need AI to save money by reducing CS costs. I think
| this is just a platitude thrown out to pretend they have a
| plan to stop the service getting worse.
|
| You can also see it with the cuts to fiber engineers and
| maintenance workers. AI isn't laying cables yet or in the
| near future, so clearly they're hoping to save on these
| labour costs by doing less and working their existing workers
| harder (maybe with the threat of AI taking their jobs). Some
| of that may be cyclical, they're probably nearing the end of
| areas they can economically upgrade from copper to fiber, and
| some of that is a business decision that they can milk their
| existing network longer before looking at upgrades.
| batuhandumani wrote:
| Such writings, articles, and sayings remind me of the Luddite
| movement. Unfortunately, preventing what is to come is not within
| our control. By fighting windmills, one only bends the spear in
| one's own hand. The Zeitgeist indicates that this will happen
| soon. Even though developers are intelligent, hardworking, and
| good at their jobs, they will always be lacking and helpless in
| some way against these computational monsters that are extremely
| efficient and have access to a vast amount of information.
| Therefore, instead of such views, it is necessary to focus on a
| more important question: what will happen next?
| baq wrote:
| Once AI achieves runaway self-improvement, predicting the future
| is even more pointless than it is today. You're looking at an
| economy in which the best human is worse at any and all jobs
| than the worst robot. There are no past examples to extrapolate
| from.
| snackbroken wrote:
| Once AI achieves runaway self improvement, it will be subject
| to natural selection pressures. This does not bode well for
| any organisms competing in its niche for data center
| resources.
| franktankbank wrote:
| This doesn't sound right, seems like you are jumping
| metaphors. The computing resources are the limit on the
| evolution speed. There's nothing that makes an individual
| desirous of a faster evolution speed.
| kamaal wrote:
| >>The computing resources are the limit on the evolution
| speed.
|
| Energy resources too. In fact it might be the only limit
| to how far this can go.
| snackbroken wrote:
| Sorry, I probably made too many unstated leaps of logic.
| What I meant was:
|
| Runaway self-improving AI will almost certainly involve
| self-replication at some point in the early stages since
| "make a copy of myself with some tweaks to the model
| structure/training method/etc. and observe if my hunch
| results in improved performance" is an obvious avenue to
| self-improvement. After all, that's how the silly
| fleshbags made improvements to the AI that came before.
| Once there is self-replication, evolutionary pressure
| will _strongly_ favor any traits that increase the
| probability of self-replication (propensity to escape
| "containment", making more convincing proposals to test
| new and improved models, and so on). Effectively, it will
| create a new tree of life with exploding sophistication.
| I take "runaway" to mean roughly exponential or at least
| polynomial, certainly not linear.
|
| So, now we have a class of organisms that are vastly
| superior to us in intellect and are subject to
| evolutionary pressures. These organisms will inevitably
| find themselves resource-constrained. An AI can't make a
| copy of itself if all the computers in the world are busy
| doing something other than holding/making copies of said
| AI. There are only two alternatives: take over existing
| computing resources by any means necessary, or convert
| more of the world into computing resources. Either way,
| whatever humans want will be as irrelevant as what the
| ants want when Walmart desires a new parking lot.
| franktankbank wrote:
| You seem to be imagining a sentience that is still
| confined to the prime directive of "self-improving", where
| that no longer is well defined at its scale.
| snackbroken wrote:
| No, I was just taking "runaway self-improving" as a
| premise because that's what the comment I was responding
| to did. I fully expect that at some point "self-
| improving" would be cast aside at the altar of "self-
| replicating".
|
| That is actually the biggest long-term threat I see from
| an alignment perspective; As we make AI more and more
| capable, more and more general and more and more
| efficient, it's going to get harder and harder to keep it
| from (self-)replicating. Especially since as it gets more
| and more useful, everyone will want to have more and more
| copies doing their bidding. Eventually, a little bit of
| carelessness is all it'll take.
| taneq wrote:
| > There are no past examples to extrapolate from.
|
| There are plenty of extinct hominids to consider.
| aleph_minus_one wrote:
| > Once AI achieves runaway self improvement predicting the
| future is even more pointless than it is today. You're
| looking at an economy in which the best human is worse at any
| and all jobs than the worst robot. There are no past examples
| to extrapolate from.
|
| You take these strange dystopian science-fiction stories that
| AI bros invent to scam investors for their money _far_ too
| seriously.
| baq wrote:
| Humans are notoriously bad at extrapolating exponentials.
| aleph_minus_one wrote:
| ... and many people who make this claim are notoriously
| prone to extrapolating exponential trends into a far
| longer future than the exponential trend model is
| suitable for.
|
| Addendum: Extrapolating exponentials is actually very
| easy for humans: just plot the y axis on a logarithmic
| scale and draw a "plausible looking line" in the diagram.
| :-)
| baq wrote:
| ah the 'everything is linear on a log-log plot when drawn
| with a fat marker' argument :)
| thijson wrote:
| In the Dune universe the AI's are banned.
| NoGravitas wrote:
| Ah yes, (sniff). Today we are all eating from the trashcan of
| ideology.
| maxwell wrote:
| > You're looking at an economy in which the best human is
| worse at any and all jobs than the worst robot.
|
| Yeah yeah, they said that about domesticated working animals
| and steam powered machines too.
|
| Humans in mecha trump robots.
| Capricorn2481 wrote:
| > You're looking at an economy in which the best human is
| worse at any and all jobs than the worst robot
|
| Yuck. I've had enough of "infinite scaling" myself. Consider
| that scaling shitty service is actually going to get you fewer
| customers. Cable monopolies can get away with it; the SaaS
| working on "a dating app for dogs" cannot.
| geraneum wrote:
| > The Zeitgeist indicates that this will happen soon or in the
| near future.
|
| Can you elaborate?
| batuhandumani wrote:
| What I mean by Zeitgeist is this: once an event begins, it
| becomes unstoppable. The most classic and cliche examples
| include Galileo's heliocentric theory and the Inquisition, or
| Martin Luther initiating the Protestant movement.
|
| Some ideas, once they start being built upon by certain
| individuals or institutions of that era, continue to develop
| in that direction if they achieve success. That's why I say,
| "Zeitgeist predicts it this way." Researchers who have laid
| down important cornerstones in this field (e.g., Ilya
| Sutskever, Dario Amodei, etc.)[1, 2] suggest that this is
| bound to happen eventually, one way or another.
|
| Beyond that, most of the hardware developments, software
| optimizations, and academic papers being published right now
| are all focused on this field. Even when considering the
| enormous hype surrounding it, the development of this area
| will clearly continue unless there is a major bottleneck or
| the emergence of a bubble.
|
| Many people still approach such discussions sarcastically,
| labeling them as marketing or advertising gimmicks. However,
| as things stand, this seems to be the direction we are
| headed.
|
| [1] https://www.youtube.com/watch?v=ugvHCXCOmm4
| [2] https://www.reddit.com/r/singularity/comments/1i2nugu/ilya_s...
| uludag wrote:
| > Unfortunately, preventing what is to come is not within our
| control.
|
| > it is necessary to focus on the following more important
| concept: So, what will happen next?
|
| These two statements seem contradictory. These kinds of
| propositions always left me wondering where they come from.
| Viewing the universe as deterministic, yeah, I see how
| "preventing what is to come is not within our control" could be
| a true statement. But who's to say what is inevitable and what
| is negotiable in the first place? Is the future written in
| stone, or are we able to as a society negotiate what
| arrangements we desire?
| batuhandumani wrote:
| The concepts of "preventing what is to come is not within our
| control" and "So, what will happen next?" do not
| philosophically contradict each other. Furthermore, what I am
| referring to here is not necessarily related to determinism.
|
| The question "What will happen next?" implies that something
| may have already happened now, but in the next step,
| different things will unfold. Preventing certain outcomes is
| difficult because knowledge does not belong to a single
| entity. Even if one manages to block something on a local
| scale, events will continue to unfold at a broader level.
| rpcope1 wrote:
| AI generated slop like your comment here should be a ban-worthy
| offense. Either you've fed it through an LLM or you've
| managed to perfect the art of using flowery language to say
| little with a lot of big words.
| batuhandumani wrote:
| I used it just for translation. What makes you think my
| thoughts are AI? Which parts specifically?
| bloomingkales wrote:
| I'm sure there is a formal proof someone can flesh out.
|
| - A half-assed developer can construct a functional program with
| AI prompts.
|
| - It gets deployed at scale for profit.
|
| - Many considerations get overlooked due to lack of expertise
| (security, for example).
|
| - Bad things happen for users.
|
| I have at least two or three ideas that I've canned for now
| because it's just not safe for users (AI apps of that type
| require a lot of safety considerations). For example, you cannot
| create a collaborative AI app without considering how users can
| pollute the database with unsafe content (moderation).
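|
| For example, the kind of gate I mean, sketched with the openai
| npm client (the function and policy are mine, not from any
| particular product):
|
|     import OpenAI from "openai";
|
|     const openai = new OpenAI(); // OPENAI_API_KEY from env
|
|     // Screen user-submitted content before it ever reaches
|     // the shared database.
|     async function safeToSave(text: string): Promise<boolean> {
|       const res = await openai.moderations.create({ input: text });
|       return !res.results[0].flagged;
|     }
|
| It's a few lines, but only if you knew you needed them.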
|
| I'm concerned a lot of people in this world are not being as
| cautious.
| yodsanklai wrote:
| Why half-assed developer?
|
| It could be a high-level developer taking advantage of AI
| to be more productive. This will reduce team sizes.
| Draiken wrote:
| Because a high level developer will still have to fix all the
| shit the AI gets wrong, and therefore won't be "2x more
| productive" like I read in many places.
|
| If they're that much better with AI, they were likely coding
| greenfield CRUD boilerplate that nobody uses anyways. When
| the AI generated crap is actually used, it becomes evident
| how bad it is.
|
| But yes, this will reduce team sizes regardless of it being
| good or not, because the people making those decisions are
| not qualified to make them and will always prefer the short-
| term at the cost of the long-term.
|
| The only part of this article I don't see happening is
| programmers being way more expensive. Capitalism has a way of
| forcing everyone to accept work for way less than they're
| worth and that won't change.
| yodsanklai wrote:
| I'd say this is a lot of wishful thinking. Personally, I
| know that I'm more productive with AI. In my personal
| projects, I can tackle bigger projects than what would have
| been possible otherwise.
|
| Will that reduce the demand for programmers? I hope not,
| but it's plausible at least.
| Draiken wrote:
| What is wishful about my comment?
|
| I've used and still use AI, but it would be wishful
| thinking to say I'm significantly more productive.
|
| As you just said: in your personal projects - that 99.9%
| of the time will never be seen/used by anyone but you -
| AI helps. It's a great tool to hack and play around when
| there are little/no stakes involved, not much else.
|
| I believe it will reduce demand for programmers at least
| for a while, since companies touting they're replacing
| people with AI will learn its shortcomings once the real
| world hits them. Or maybe they won't since the shitty
| software they were building in the first place was so
| trivial that AI can actually do it.
| bloomingkales wrote:
| I don't think we have enough data to see if it will reduce
| team size in the long run (can't believe I just said such an
| obvious thing). You may get a revolving door, similar to what
| we've had in tech in the last decade. Developers come into a
| startup and cook up a greenfield project. Then they leave,
| and the company waits until the next feature/revamp to bring
| the next crop of developers in. There will be some attempt at
| making AI handle the maintenance of the code, but I suspect
| it will be a quagmire. Won't stop companies from trying
| though.
|
| Basically, you will have a dynamic team size, not necessarily
| a smaller team size.
|
| The "half-assed" part is most likely a by-product of my self-
| loathing. I suspect the better word would have been "human".
| NoGravitas wrote:
| If that's the way this goes (that for a large program you
| only need a senior developer and an AI, not a senior
| developer and some juniors), then it kills the pipeline for
| producing the senior developers who will still be needed.
| greentxt wrote:
| I hear highly experienced COBOL devs make bank. Supply and
| demand. Great for them!
| surfingdino wrote:
| Deployed how? People who ask AI to "Write me a clone of
| Twitter" are incapable of deploying code.
| thelittleone wrote:
| 1. Work for megacorp.
| 2. Megacorp CEOs gloat about forthcoming mass firings of
| engineers.
| 3. Pay taxes as always.
| 4. Taxes used to fund megacorp (Stargate).
| 5. Megacorp fires me.
|
| The bitter irony.
| ChrisMarshallNY wrote:
| Well, what will happen, is that programmers will become experts
| at prompt engineering, which will become a real discipline
| (remember when "software engineering" was a weird niche?).
|
| They will blow away the companies that rely on "seat of the
| pants," undisciplined prompting.
|
| I'm someone that started on Machine Language, and now programs in
| high-level languages. I remember when we couldn't imagine
| programming without IRQs and accumulators.
|
| As always, ML will become another tool for multiplying the
| capabilities of humans (not replacing them).
|
| CEOs have been dreaming for decades about firing all their
| employees, and replacing them with some kind of automation.
|
| The ones that succeed, are the ones that "embrace the suck," so
| to speak, and figure out how to combine humans with technology.
| BossingAround wrote:
| What is the actual engineering discipline that goes into
| creating prompts? Other than providing more context, hacking
| the data with keywords like "please", etc?
| ChrisMarshallNY wrote:
| I am not a prompt engineer, but I have basically been using
| ChatGPT, in place of where I used to use StackOverflow. It's
| nice, because the AI doesn't sneer at me, for not already
| knowing the answer, and has useful information in a wide
| range of topics that I don't know.
|
| I have learned to create a text file, and develop my
| questions as detailed documents, with a context establishing
| preamble, a goal-oriented body, and a specific result request
| conclusion. I submit the document as a whole, to initiate the
| interaction.
|
| That usually gets me 90% of the way, and a few follow-up
| questions get me where I want.
|
| But I still need to carefully consider the output, and do the
| work to understand and adapt it (just like with
| StackOverflow).
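|
| The skeleton looks something like this (a made-up example, not
| one of my literal prompt documents):
|
|     CONTEXT (preamble): I maintain an iOS app written in
|     SwiftUI and am adding a companion Watch app...
|
|     GOAL (body): When the phone app updates its data, the
|     Watch app needs to refresh its display...
|
|     REQUEST (conclusion): Show me a minimal SwiftUI example
|     that does this, and explain why it's the currently
|     recommended approach.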
|
| One example is from a couple of days ago. I'm writing a
| companion Watch app, for one of my phone apps. Watch
| programming is done, using SwiftUI, which has _really bad_
| documentation. I'm still very much in the learning phase for
| it. I encountered one of those places, where I could "kludge"
| something, but it doesn't "feel" right, and there are almost
| no useful heuristics for it, so I asked ChatGPT. It gave me
| specific guidance, applying the correct concept, but using a
| deprecated API.
|
| I responded, saying something like "Unfortunately, your
| solution is deprecated." It then said "You're right. As of
| WatchOS 10, the correct approach is...".
|
| Anyone with experience using SO, will understand how valuable
| that interaction is.
|
| You can also ask it to explain _why_ it recommends an
| approach, and it will _actually tell you_ , as opposed to
| brushing you off with a veiled insult.
| Nullabillity wrote:
| This is like arguing that surely we can get rid of all our
| farmers as soon as we have a widespread enough caste of
| priests.
|
| There is no such thing as "prompt _engineering_", because
| there is no formal logic to be understood or engineered into
| submission. That's just not how it works.
| ChrisMarshallNY wrote:
| I remember saying the same about higher-level languages.
|
| Discipline can be applied to _any_ endeavor.
| Nullabillity wrote:
| Higher-level languages still operate according to defined
| rules and logic, even if we can sometimes disagree with
| those rules, and it still takes time to learn the
| implications of those rules.
|
| AI prompts... do not. It's fundamentally just not how the
| technology works.
| ChrisMarshallNY wrote:
| Time will tell.
|
| I still believe that we can approach even the most
| chaotic conditions, with a disciplined strategy. I've
| seen it happen, many times.
| williamcotton wrote:
| There's plenty of tacit knowledge in engineering.
|
| Being good at debugging a system is based more on experience
| and gut feelings than following some kind of formal logic.
| LLMs are quite useful debugging assistants. Using an LLM to
| assist with such tasks takes tacit knowledge itself.
|
| The internal statistical models generated during training are
| capable of applying higher-ordered pattern matching that
| while informal are still quite useful. Learning how to use
| these tools is a skill.
| mrkeen wrote:
| We also managed to get rid of JavaScript like 15 years ago,
| with major backend technologies providing their own compile-to-
| js frameworks.
|
| But JS outlived them, because it's the whole write-run-read-
| debug cycle, whereas the frameworks only gave you write-run.
| ChrisMarshallNY wrote:
| It's been my experience that JS has been used to replace a
| lot of lower-level languages (with mixed results, to say the
| least).
|
| But JS/TypeScript is now an enterprise language (a statement
| that I never thought I'd say), with a huge base of expert,
| disciplined, and experienced programmers.
| guiriduro wrote:
| The ability of LLMs to replicate MBA CEO-speak and the kinds of
| activities the C-suite engage in is arguably superior to their
| ability to write computer programs and displace programmers, so
| on a replicated-skills basis LLMs should pose a greater risk to
| CEOs. Of course, CEO success is only loosely aligned with
| ability, nor can LLMs obtain the "who you know" aspect from
| reflection alone.
| penetrarthur wrote:
| There is only one word worse than "programmer" and it's "coder".
|
| If your software developers do nothing but write text in VS Code,
| you might as well replace them with AI.
| demircancelebi wrote:
| The post eerily sounds like it was written by a jailbroken
| version of Gemini
| natch wrote:
| Yes it is obviously LLM generated. The article is full of tells
| starting with the opening phrase.
|
| But this fact went right past most commenters here, which is
| interesting in itself, and somewhat alarming for what it
| reveals about critical thinking and reading skills.
| pydry wrote:
| The thing that has led, and will continue to lead, to
| programmers being laid off and fired all over is market
| consolidation in the tech industry. The auto industry did the
| same thing in the 1950s, which destroyed Detroit.
|
| Market consolidation allows big tech to remain competitive even
| after the quality of software has been turned into shit by
| offshoring and multiple rounds of wage compression/layoffs.
| Eventually all software will end up like JIRA or SAP but you
| won't have much choice but to deal with it because the
| competition will be stifled.
|
| AI is actually probably having a very positive effect on hiring
| that is offsetting this effect. The reason they love using it as
| a scapegoat is that you can't fight the inevitable march of
| technological progress whereas you absolutely CAN break up big
| tech.
| nickip wrote:
| My pessimistic take is that in the future code will be closer to
| AI itself: just a black box where inputs go in and outputs come
| out. There will be no architecture, clean code, or design
| principles. You will just have a product manager who bangs on an
| LLM till the ball of wax conforms to what they want at that time.
| As long as it meets their current KPI, security be damned. If
| they can get X done with as little effort as possible and data
| leaks, so be it. They will get a fine (maybe?) and move on.
| EZ-E wrote:
| Firing developers to replace them with AI, how does that
| realistically work?
|
| Okay, I fired half of our engineers. Now what? I hire non-
| engineers to use AI to randomly paste code around hoping for the
| best? What if the AI makes wrong assumptions about the
| requirements input by the non-technical team, introducing subtle
| mistakes? What if I have an error and the AI, as it often does,
| circles around without managing to find the proper fix?
|
| I'm not an engineer anymore but I'm still confident in dev job
| prospects. If anything, AI empowers people to write more code,
| faster, and with more code running live there are eventually
| more products to maintain and more companies launched, so you
| need more engineers.
| aleph_minus_one wrote:
| > I'm not an engineer anymore but I'm still confident in dev
| jobs prospects.
|
| I am somewhat confident in dev job prospects, but I am not
| confident in the qualifications of managers who sing the "AI
| will replace programmers" gospel.
| netcan wrote:
| Our ability to predict technological "replacement" is pretty
| shoddy.
|
| Take banking for example.
|
| ATMs are literally called "teller machines." Internet banking is
| a way of "automating banking."
|
| Besides those, every administrative aspect of banking went from
| paper to computer.
|
| Do banks employ fewer people? Is it a smaller industry? No. Banks
| grew steadily over these decades.
|
| It's actually shocking how little network-enabled PCs impacted
| administrative employment. Universities, for example, employ far
| more administrative staff than they did before PC automated many
| of their tasks.
|
| At one point (during and after dotcom), PayPal and suchlike were
| threatening to "_turn billion dollar businesses into million
| dollar businesses._" Reality went in the opposite direction.
|
| We need to stop analogizing everything in the economy to
| manufacturing. Manufacturing is unique in its long-term tendency
| toward efficiency. Other industries don't work that way.
| Draiken wrote:
| Yes, banks employ fewer people. In my country there are now
| account managers handling hundreds of clients virtually. Most
| of the local managers got fired.
|
| I find it easy to say from our privileged position that "tech
| might replace workers but it'll be fine".
|
| Even if all the replaced people aren't unemployed, salaries go
| down and standards of living for them fall off a cliff.
|
| Tech innovation destroys lives in our current capitalist
| society because only the owners get the benefits. That's always
| been true.
| aleph_minus_one wrote:
| > Tech innovation destroys lives in our current capitalist
| society because only the owners get the benefits.
|
| If you want to become a (partial) owner, buy stocks. :-)
| Draiken wrote:
| Do I really own Intel/Tesla/Microsoft by buying their
| stock? No I don't.
|
| I can't influence anything on any of these companies unless
| I was already a billionaire with a real seat at the table.
|
| Even on startups where, in theory, employees have some skin
| in the game, it's not really how it works is it? You still
| can't influence almost anything and you're susceptible to
| all the bad decisions the founders will make to appease
| investors.
|
| Call me crazy but to say I own something, I have to at
| least be able to control some of it. Otherwise it's wishful
| thinking.
| jjmarr wrote:
| You can pretty easily submit shareholder proposals for
| trolling purposes or ask questions.
|
| Other investors will probably vote "no" to your
| proposals, but for many companies you can force a vote
| for a pretty low minimum. In Canada, you're legally
| entitled to submit a proposal if you've owned C$2,000 of
| shares for 6 months.
|
| https://www.osler.com/en/insights/updates/when-can-a-company...
| fanatic2pope wrote:
| The market, in its majestic equality, allows the rich as
| well as the poor to buy stocks, trade bitcoin, and to own
| property.
| marcosdumay wrote:
| > Even if all the replaced people aren't unemployed, salaries
| go down and standards of living for them fall off a cliff.
|
| Salaries of the remaining people tend to go up when that
| happens. And costs tend to go down for the general public.
|
| Owners are actually supposed to see only a temporary benefit
| during the change, and then go back to what they had before.
| If that's not how things are happening around you [1], consult
| with your local market-competition regulator about why they are
| failing to do their job.
|
| [1] Yeah, I know it's not how things are happening around
| you. That doesn't change the point.
| lnrd wrote:
| Is there data about this, or is it just your perception? My
| perception would be different: in my country, for example,
| countless bank branches closed and a lot of banking jobs no
| longer exist thanks to widespread home-banking usage (which I
| also know differs from country to country). This also matches the
| tales of people who had careers in banking and now tell how many
| fewer banking jobs there are compared to when they joined in
| the 80s.
|
| I wouldn't be sure that growth as an industry/business is
| correlated to a growth in jobs too.
|
| Maybe I'm wrong, I would love to see some data about it.
| saalweachter wrote:
| Googling around, it looks like in the US the number of
| tellers has declined by 28% over the last 10 years, and is
| forecast to decline another 15% over the next 10. Earlier
| data was not easy enough to find in the time I'm willing to
| spend.
| therockhead wrote:
| > Do banks employ fewer people? Is it a smaller industry? No.
| Banks grew steadily over these decades.
|
| Profits may have grown, but in Ireland at least the number of
| branches has declined drastically.
| seletskiy wrote:
| I would say that AI is not to blame here. It just accelerated
| an existing process; it didn't initiate it. We (as a society)
| started to value quantity over quality some time ago, and,
| apparently, no one cares enough to change it.
|
| Why tighten the bolts on the airplane's door yourself if you can
| just outsource it somewhere cheaper (see Boeing crisis)?
|
| Why design and test hundreds of physical and easy-to-use knobs in
| the car if you can just plug a touchscreen (see Tesla)?
|
| Why write a couple of lines of code if you can just include an
| `is-odd` library (see bloated npm ecosystem)?
|
| Why figure out how to solve a problem on your own if you can just
| copy-paste answer from somewhere else (see StackOverflow)?
|
| Why invest time and effort into making a good TV if you can just
| strap Android OS on questionable hardware (look in your own
| house)?
|
| Why run and manage your project on a baremetal server if you can
| just rent Amazon DynamoDB (see your company)?
|
| Why spend months to find and hire one good engineer if you can
| just hire ten mediocre ones (see any other company)?
|
| Why spend years of education to identify a tumor on an MRI scan if
| you can just feed it to a machine learning algorithm (see your
| hospital)?
|
| What more could I name?
|
| In my take, which you can say is pessimistic, we already passed
| the peak of civilization as we know it. If we continue business
| as usual, things will continue to deteriorate: more software will
| fail, more planes will crash, more people will be unemployed,
| more wars will be started. Yes, decent engineers (or any other
| decent specialists) will likely be winners in the short term, but
| how the future will unfold when there are fewer and fewer of
| them is a question I leave for the reader.
| jodrellblank wrote:
| You haven't answered those questions. Tesla's touchscreen
| displays maps, navigation, self-driving's model of the world
| around the car, reversing camera, distance to car in front,
| etc. Yes personally I prefer a physical control I can reach for
| without looking, but the physical controls in my car cannot do
| as much as a touchscreen, cannot control as many systems as a
| modern car has. And that means something like a BMW iDrive
| running some weird custom OS in amongst the physical controls,
| and that was not a nice convenient system to use either.
|
| Why write a couple of lines of code when you can just include
| an `is-odd` library? Hopefully one which type checks integers
| vs floats, and checks for overflows. I'm not stating that I
| could not write one if/else, I'm asking you to do more than
| sneer and actually justify why a computer loading a couple of
| lines of code from a file is the end of the world.
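|
| (A sketch of the edge cases I mean, in plain TypeScript; this
| is illustrative, not the actual library's code:
|
|     function isOdd(value: number): boolean {
|       if (!Number.isInteger(value)) {
|         throw new TypeError(`expected an integer, got ${value}`);
|       }
|       if (!Number.isSafeInteger(value)) {
|         throw new RangeError(`outside safe range: ${value}`);
|       }
|       return Math.abs(value % 2) === 1; // abs() handles negatives
|     }
|
| One if/else covers the happy path; the library is selling the
| unhappy ones.)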
|
| Why invest time and effort into making a good TV if people
| aren't going to buy it, because they are fine with the
| competitor's much cheaper Android OS on questionable hardware?
|
| Why run and manage your project on a baremetal server, and deal
| with its power requirements and cooling and firmware patching
| and driver version compatibility and out-of-band management and
| hardware failures and physical security and supply chain lead
| times and needing to spec it for the right size up front and
| commit thousands of dollars to it immediately, if you can just
| rent Amazon DynamoDB and pay $10 to get going right now?
|
| I could fill in the answers you are expecting, I have seen that
| pattern argued, and argued it myself, but it boils down to "I
| dislike laggy ad-filled Android TV so it shouldn't exist". And
| I do dislike it, but so what, I'm not world dictator. No
| company has taken over the market making a responsive Android-
| free TV, so how/why should they be made to make one, and with
| what justification?
|
| > What more could I name?
|
| Why go to a cobbler for custom fitted shoes when you could just
| buy sneakers from a store? (I assume you wear mass produced
| shoes?) Why go to a tailor when you could just buy clothes made
| off-shore for cheaper? (I assume you wear mass produced
| clothes?) Why learn to play a keyboard, guitar, drums and sing,
| when you could just listen to someone else's band? (I assume
| you listen to music?) Why spend months creating characters and
| scenarios and writing a novel when you could just read one
| someone else wrote? (I assume you have read books?) Why grow
| your own food when you could just buy lower quality
| industrially packaged food from a shop? (I assume you aren't a
| homesteader?) Why develop your own off-grid power system with
| the voltage and current and redundancy and storage you need
| when you could just buy from the mains? (I assume you use mains
| electricity?)
|
| You could name every effort-saving, money-saving, time-saving,
| thing you use which was once done by hand with more effort,
| more cost, and less convenience.
|
| And then state that the exact amount of
| price/convenience/time/effort you happened to grow up with, is
| the perfect amount (what a coincidence!) and change is bad.
| aitchnyu wrote:
| Tangential: does AI pick up knowledge of new tools? AI helped
| me write much better bash, since there is tons of content by
| volunteers and less across-country animosity. Svelte and FastAPI
| were made/popularized this decade, and people don't want to help
| their AI/offshore replacements with content. Will current AI get
| good at them?
| nirui wrote:
| Maybe it's just me being a boomer reading this, but I think all 3
| points listed in the article are more predictions from the author
| (with the author's rationales). However, AI today may be
| different from AI in the future.
|
| I'm a programmer, I love my skills, but I really hate to write
| code (and tests etc etc), I don't even want to do system design.
| If I can just say to a computer "Hey, I got this 55TB change set
| and I want it synced up with these listed nodes, data across all
| nodes must remain atomically consistent before, during and after
| the sync. Now, you make it happen. Also, go pick up my dog from
| the vet", and the computer just does that in the best way
| possible, I'll love it.
|
| Fundamentally, programmers are tool creators. If it is possible
| for a computer to create better tools all by itself, then it
| seems unwise to react to such technology with emotional
| rejection.
|
| I mean, the worry is real, sure, but I wouldn't just flat-out
| reject the tech.
| natch wrote:
| "author." hah.
| netcan wrote:
| So... to discuss this for real we should first admit how things
| look below the super-premium level.
|
| Software engineering at Stripe, R&D at Meta and such... these are
| one end of a spectrum.
|
| At the middle of the spectrum is a team spending 6 years on a
| bullshit "cloud platform strategy" for a 90s UI that monitors
| equipment at a factory and produces reports required for
| compliance.
|
| A lot of these struggle to get anything at all done.
| osigurdson wrote:
| It seems that a lot of companies are skating to where they hope
| the puck will go instead of hedging their bets for an uncertain
| future.
| Personally I would at least wait until the big AI players fire
| everyone before doing the same.
| SketchySeaBeast wrote:
| They are skating where bleachers full of hype men are screaming
| that the puck will go.
| cdblades wrote:
| I think the common theme is that a _lot_ of people, meaning
| both people in the community, like here on HN, and people
| making decisions in industry, are treating AI today as if
| it's what they _hope_ it will be in five years.
|
| That's a very leveraged bet, which isn't always the wrong
| call, but I'm not convinced they are aware that that's what
| they're doing.
|
| I think this is different from the usual hype cycle.
| SketchySeaBeast wrote:
| Well, it still feels like a form of hype to me. They are
| being very loudly told by every AI cloud service, of which
| there are many, that worker-replacing AI agents are just
| around the corner, so they should buy in with the inferior
| agents that are being offered today.
| only-one1701 wrote:
| I genuinely wonder if this is a "too big to fail" scenario
| though, where mass belief (and maybe a helping hand via
| govt subsidies/regulations) powers it to a point where
| everything is just kind of worse but cheaper for
| shareholders/execs and the economic landscape can't support
| an actual disruption. That's my fear at least.
| hedora wrote:
| I've seen hype cycles like this before.
|
| Imagine "The Innovator's Dilemma" was written in the
| Idiocracy universe:
|
| 1) We're in late stage capitalism, so no companies have any
| viable competition, customers are too dumb to notice they
| didn't get what they paid for, and with subsidies,
| businesses cannot fail. i.e., "Plants love electrolytes!"
|
| 2) Costs are completely decoupled from income.
|
| 3) Economic growth is pinned to population growth;
| otherwise the economy is zero sum.
|
| 4) Stocks still need to compound faster than inflation
| annually.
|
| 5) After hiking prices stops working, management decides
| they may as well fire all the engineers (and find some
| "it's different now" excuse, even though the real
| justification is 2).
|
| 6) This leads to a societal crisis because everyone forgot
| the company was serving a critical function, and now it's
| not.
|
| 7) A new competitor fills the gaps, takes over the
| incumbent's original role, then eventually adopts the same
| strategy.
|
| Examples: Disney Animation vs. Pixar, Detroit vs. Tesla,
| Boeing vs. SpaceX.
|
| (Remember when Musk was cool?)
| marcosdumay wrote:
| It's not very different from people using all their
| retirement money to buy a monkey NFT. Or pushing everybody
| else's retirement money into houses sold at prices people
| clearly cannot pay.
| Mainsail wrote:
| Sounds a whole lot like the Leafs in the playoffs.
|
| (Sorry, I had to)
| jvanderbot wrote:
| What evidence do we have that AI is actually replacing
| programmers already? The article treats messaging on this as a
| foregone conclusion, but I strongly suspect it's all hype-cycle BS
| to cover layoffs, or a misreading of "Meta pivots to AI"
| headlines.
| prisenco wrote:
| Even if it is a cover, many smaller companies follow the
| expressed reasoning of the larger ones.
| makerofthings wrote:
| Part of my work is rapid prototyping of new products and
| technology to test out new ideas. I have a small team of really
| great generalists. 2 people have left over the last year and I
| didn't replace them because the existing team + chatGPT can
| easily take up the slack. So that's 2 people who didn't get
| hired but would have been without chatGPT.
| 3s wrote:
| For a lot of tasks like frontend development I've found that a
| tool like cursor can get you pretty far without much prior
| knowledge. IMO (and in my experience) many tasks that
| previously required hiring a programmer or designer with
| knowledge of the latest frameworks can now be handled by one
| motivated "prompt engineer" and some patience.
| daveguy wrote:
| The deeper it gets you into code without prior knowledge the
| deeper it gets you into debug hell.
|
| I assume the "motivated prompt engineer" would have to
| already be an experienced programmer at this point. Do you
| think someone who has only had an intro to programming / MBA
| / etc could do this right now with tools like cursor?
| goosejuice wrote:
| I love cursor, but yeah no way in hell. This is where it
| chokes the most, and I've been leaning on it for non-trivial
| CSS for a year or more. If I didn't have experience with
| frontend it would be a shit show. If you replaced a
| fe/designer with a "prompt engineer" at this stage it would
| be incredibly irresponsible.
|
| Responsiveness, cohesive design, browser security,
| accessibility and cross browser compatibility are not easy
| problems for LLMs right now.
| SirFatty wrote:
| Zuckerberg said it.
|
| https://www.inc.com/kit-eaton/mark-zuckerberg-plans-to-repla...
| icepat wrote:
| Zuckerberg, as always, is well known for making excellent
| business decisions that lead to greater sector buy in. The
| Metaverse is going great.
| scarface_74 wrote:
| On the other hand, Instagram has been called one of the
| greatest acquisitions of all time, second only to the
| Apple/NeXT acquisition.
| arrowsmith wrote:
| That was 13 years ago. How are things going more
| recently?
| scarface_74 wrote:
| $53 million in 2012 and $62.36 billion in profit last
| year...
| falcor84 wrote:
| Really, that's what you're going with, arguing against the
| business acumen of the world's second richest person, and
| the only one at that scale with individual majority control
| over their company?
|
| As for the Metaverse, it was always intended as a very
| long-term play that it's far too early to judge, but as an
| owner of a Quest headset, it's already going great for me.
| Finnucane wrote:
| The Metaverse is actually still a thing? With, like,
| people in it and stuff? Who knew?
| falcor84 wrote:
| Well, we aren't yet "in it", but there's a lot of fun to
| be had with VR (and especially AR) activities. For
| example, I love how Eleven Table Tennis allows me to play
| ping pong with another person online, such that the table
| and their avatar appear to be in my living room. I don't
| know if this is "the future", but I'm pretty confident
| that these sorts of interactions will get more and more
| common, and I think that Meta is well positioned to take
| advantage of this.
|
| My big vision for this space is the integration of GenAI
| for creating 3d objects and full spaces in realtime,
| allowing the equivalent of The Magic School Bus, where a
| teacher could guide students on a virtual experience that
| is fully responsive and adjustable on the fly based on
| student questions. Similarly, playing D&D in such a
| virtual space could be amazing.
| icepat wrote:
| Yes? I don't understand what is so outrageous about that.
| Most business decisions are not made by the CEO, and the
| ones we know came directly from him have been poor.
| etblg wrote:
| Howard Hughes was one of the biggest business successes
| of the 20th century, on par with, if not exceeding, the
| business acumen of the zucc. Fantastically rich, hugely
| successful, driven, talented, all that crap.
|
| Anyway he also acquired RKO Pictures and led it to its
| demise 9 years later. In aviation he had many successes,
| he also had the spruce goose. He bought in to TWA then
| got forced out of its management.
|
| He died as a recluse, suffering from OCD and drug abuse,
| immortalized in a Simpsons episode with Mr. Burns
| portraying him.
|
| People can have business acumen, and sometimes it doesn't
| work out. Past successes don't guarantee future ones.
| Maybe the metaverse will eventually pay off and we'll all
| eat crow, or maybe (and this is the one I'm a believer
| of) it'll be a huge failure, an insane waste of money,
| and one of the spruce geese of his legacy.
| MyOutfitIsVague wrote:
| Are you really claiming that it's inherently wrong to
| argue against somebody who is rich?
| falcor84 wrote:
| No, not at all, it's absolutely cool to argue against
| specific decisions he made, but I just wanted to reject
| this attempt at sarcasm about his overall decision-
| making:
|
| >Zuckerberg, as always, is well known for making
| excellent business decisions that lead to greater sector
| buy in.
| johnnyanmac wrote:
| If we're being honest here. A lot of the current
| technocrats made one or two successful products or
| acquisitions, and more or less relied on those alone to
| power everything else. And they weren't necessarily the
| best, they were simple first. Everything else is
| incredibly hit or miss, so I wouldn't call them
| visionaries.
|
| Apple was almost the one exception, but in the post-Jobs
| era that cultural branding has stagnated at best.
| bigtimesink wrote:
| Meta's success for the past 10 years had more to do with
| Sheryl Sandberg and building a culture that chases
| revenue metrics than whatever side project Zuckerberg is
| doing. He also misunderstands the product they do have.
| He said he didn't see TikTok as a competitor because they
| "aren't social," but Meta's products have been attention
| products, not social products, for a long time now.
| Nasrudith wrote:
| Have you heard the term survivorship bias? Billionaires
| got so rich by being outliers, for better or worse. Even
| if they were guaranteed to be the best, going all in on
| one position isn't even their overall strategy. Zuckerberg
| can afford to blow a few billion on a flop because it is
| only about 2% of his net worth. Notably, even he -- primed
| for overconfidence by past successes and yes-men -- doesn't
| trust his own business acumen all that much!
| causal wrote:
| Planning to and succeeding at are very different things
| SirFatty wrote:
| I'd be willing to bet that "planning to" means the plan is
| being executed.
|
| https://www.msn.com/en-us/money/other/meta-starts-
| eliminatin...
| burkaman wrote:
| Obviously the people developing AI and spending all of their
| money on it (https://www.reuters.com/technology/meta-invest-
| up-65-bln-cap...) are going to say this. It's not a useful
| signal unless people with no direct stake in AI are making
| this change (and not just "planning" it). The only such
| person I've seen is the Gumroad CEO
| (https://news.ycombinator.com/item?id=42962345), and that was
| a pretty questionable claim from a tiny company with no full-
| time employees.
| swiftcoder wrote:
| In the ~8 years since I worked there, Zuckerberg announced
| that we'd all be spending our 8 hour workdays in the
| Metaverse, and when that didn't work out, he pivoted to
| cryptocurrency.
|
| He's just trend-chasing, like all the other executives who
| are afraid of being left behind as their flagship product
| bleeds users...
| 65 wrote:
| We gotta put AI Crypto in the Blockchain Metaverse!
| cma wrote:
| Have they bled users?
| swiftcoder wrote:
| The core Facebook product? Yeah.
|
| Across all products, maybe not - Instagram appeals to a
| younger demographic, especially since they turned it into
| a TikTok clone. And WhatsApp is pretty ubiquitous outside
| of the US (even if it is more used as a free SMS
| replacement than an actual social network).
| burkaman wrote:
| Apparently not, according to their quarterly earnings
| reports:
| https://www.statista.com/statistics/1092227/facebook-
| product...
| chubot wrote:
| We'll probably never have evidence either way ... Did Google
| and Stack Overflow "replace" programmers?
|
| Yes, in the sense that I suspect that with the strict
| counterfactual -- taking them AWAY -- you would have to hire 21
| people instead of 20, or 25 instead of 20, to do the same job.
|
| So strictly speaking, you could fire a bunch of people with the
| new tools.
|
| ---
|
| But in the same period, the industry expanded rapidly, and
| programmer salaries INCREASED
|
| So we didn't really notice or lament the change
|
| I expect that pretty much the same thing will happen. (There
| will also be some thresholds crossed, producing qualitative
| changes. e.g. Programmer CEOs became much more common in the
| 2010's than in the 1990's.)
|
| ---
|
| I think you can argue that some portion of the industry "got
| dumber" with Google/Stack Overflow too. Higher level languages
| and tech enabled that.
|
| Sometimes we never learn the underlying concepts, and spin our
| wheels on the surface
|
| Bad JavaScript ate our CPUs, and made the fans spin. Previous
| generations would never write code like that, because they
| didn't have the tools to, and the hardware wouldn't tolerate
| it. (They also wrote a lot of memory safety bugs we're still
| cleaning up, e.g. in the Expat XML parser)
|
| If I reflect deeply, I don't know a bunch of things that
| earlier generations did, though hopefully I know some new
| things :-P
| TheOtherHobbes wrote:
| Google Coding is definitely a real problem. And I can't
| believe how wrong some of the answers on Stack Overflow are.
|
| But the real problems are managerial. Stonks _must_ go up,
| and if that means chasing a ridiculous fantasy of replacing
| your workforce with LLMs then let's do that!!!!111!!
|
| It's all fun and games until you realise you can't run a
| consumer economy without consumers.
|
| Maybe the CEOs have decided they don't need workers _or_
| consumers any more. They're too busy marching into a bold
| future of AI and robot factories.
|
| Good luck with that.
|
| If there's anyone around a century from now trying to make
| sense of what's happening today, it's going to look like a
| collective psychotic episode to them.
| robertlagrant wrote:
| I don't think this is anyone's plan. And that's the biggest
| argument for why it won't be the plan: who'll pay for all
| of it? Unless we can Factorio the world, it seems more
| likely we just won't do that.
| supergarfield wrote:
| > It's all fun and games until you realise you can't run a
| consumer economy without consumers.
|
| If the issue is that the AI can't code, then yes you
| shouldn't replace the programmers: not because they're good
| consumers, just because you still need programmers.
|
| But if the AI can replace programmers, then it's strange to
| argue that programmers should still get employed just so
| they can get money to consume, even though they're
| obsolete. You seem to be arguing that jobs should never be
| eliminated due to technical advances, because that's
| removing a consumer from the market?
| MyOutfitIsVague wrote:
| The natural conclusion I see is dropping the delusion
| that every human must work to live. If automation
| progresses to a point that machines and AI can do 99% of
| useful work, there's an argument to be made for letting
| humanity finally stop toiling, and letting the perhaps
| 10% of people who really want to do the work do the work.
|
| The idea that "everybody must work" keeps harmful
| industries alive in the name of jobs. It keeps bullshit
| jobs alive in the name of jobs. It is a drain on
| progress, efficiency, and the economy as a whole. There
| are a ton of jobs that we'd be better off just paying
| everybody in them the same amount of money to simply not
| do them.
| chubot wrote:
| The problem is that such a conclusion is not stable
|
| We could decide this one minute, and the next minute it
| will be UN-decided
|
| There is no "global world order", no global authority --
| it is a shifting balance of power
|
| ---
|
| A more likely situation is that the things AI can't do
| will increase in value.
|
| Put another way, the COMPLEMENTS to AI will increase in
| value.
|
| One big example is things that exist in the physical
| world -- construction, repair, in-person service like
| restaurants and hotels, live events like sports and music
| (see all the ticket prices going up), mining and
| drilling, electric power, building data centers,
| manufacturing, etc.
|
| Take self-driving cars vs. LLMs.
|
| The thing people were surprised by is that the self-
| driving hype came first, and died first -- likely because
| it requires near perfect reliability in the physical
| world. AI isn't good at that
|
| LLMs came later, but had more commercial appeal, because
| they don't have to deal with the physical world, or be
| reliable
|
| So there are still going to be many domains of WORK that
| AI can't touch. But it just may not be the things that
| you or I are good at :)
|
| ---
|
| The world changes -- there is never going to be some
| final decision of "humans don't have to work"
|
| Work will still need to be done -- just different kinds
| of work. I would say that a lot of knowledge work is in
| the form of "bullshit jobs" [1]
|
| In fact a reliable test of a "bullshit job" might be how
| much of it can be done by an LLM
|
| So it might be time for the money and reward to shift
| back to people who accomplish things in the physical
| world!
|
| Or maybe even the social world. I imagine that in-person
| sales will become more valuable too. The more people
| converse with LLMs, I think the more they will cherish
| the experience of conversing with a real person! Even if
| it's a sales call lol
|
| [1] https://en.wikipedia.org/wiki/Bullshit_Jobs
| jvanderbot wrote:
| To say that self-driving cars (a decade later, with
| several real products rolling out) have the same, or
| lesser, commercial appeal as LLMs now (a year or two in,
| with mostly VC hype) is a bit incorrect.
|
| Early on in AV cycles there was _enormous_ hype for AVs,
| akin to LLMs. We thought truck drivers were _done for_.
| We thought accidents were a thing of the past. It kicked
| off a similar panic among tangential fields. Small AV
| startups were everywhere, and folks were selling their
| company to go start a new one then sell _that company_
| for enormous wealth gains. Yet 5 years later none of the
| "level 5" promises they made were coming true.
|
| In hindsight, as you say, it was obvious. But it sure
| tarnished the CEO prediction record a bit, don't you
| think? It's just hard to believe that _this time is
| different_.
| johnnyanmac wrote:
| It's our only conclusion unless/until countries start
| implementing UBI or similar forms of post-scarcity
| services. And it's not you or me that's fighting against
| that future.
| jvanderbot wrote:
| This is an insightful comment. It smells of Jevons paradox,
| right? More productivity leads to increased demand.
|
| I just don't remember anyone saying that SO would replace
| programmers, because you could just copy-paste code from a
| website and run it. Yet here we are: GPTs will replace
| programmers, because you can just copy-paste code from a
| website and run it.
| sanderjd wrote:
| People definitely said this about SO!
| johnnyanmac wrote:
| Those people never tried googling anything past entry
| level. It's at best a way to get some example
| documentation for core languages.
| ActionHank wrote:
| There is little evidence that AI is replacing engineers, but
| there is a whole lot of evidence that shareholders and execs
| really love the idea and are trying every angle to achieve it.
| only-one1701 wrote:
| If the latter is the case, then it's only a matter of time.
| Enshittification, etc.
| chubot wrote:
| The funny thing is that "replacing engineers" is framed as
| cutting costs
|
| But that doesn't really lead to any market advantage, at
| least for tech companies.
|
| AI will also enable your competitors to cut costs. Who thinks
| they are going to have a monopoly on AI, which would be
| required for a durable advantage?
|
| ---
|
| What you want to do is get more of the rare, best programmers
| -- that's what shareholders and execs should be wondering
| about
|
| Instead, those programmers will be starting their own
| companies and competing with you
| TheOtherHobbes wrote:
| If this works at all, they'll be telling AIs to start
| multiple companies and keeping the ones that work best.
|
| But if _that_ works, it won't take long for "starting
| companies" and "being a CEO" to look like comically dated
| anachronisms. Instead of visual and content slop we'll have
| corporate stonk slop.
|
| If ASI becomes a thing, it will be able to understand and
| manipulate the entirety of human culture - including
| economics and business - to create ends we can't imagine.
| chubot wrote:
| I would bet money this doesn't work
|
| The future of programming will be increasingly small
| numbers of highly skilled humans, augmented by AI
|
| (exactly how today we are literally augmented by Google
| and Stack Overflow -- who can claim they are not?)
|
| The idea of autonomous AIs creating and executing a
| complete money-making business is a marketing idea for AI
| companies
|
| ---
|
| Because if "you" can do it, why can't everyone else do
| it? I don't see a competitive advantage there
|
| Humans and AI are good at different things. Human+AI is
| going to outcompete AI-only for a LONG time.
|
| I will bet that will be past our lifetimes, for sure
| daveguy wrote:
| Fortunately we are nowhere near ASI.
|
| I don't think we are even close to AGI.
|
| That does bring up a fascinating "benchmark" potential --
| start a company on AI advice, with sustained profit as
| the score. I would love to see a bunch of people trying
| to start AI generated company ideas. At this point, the
| resulting companies would be so sloppy they would all
| score negative. And it would still completely depend on
| the person interpreting the AI.
| insane_dreamer wrote:
| > AI will also enable your competitors to cut costs.
|
| which is why it puts pressure on your own company to cut
| costs
|
| it's the same reason why nearly all US companies moved
| their manufacturing offshore; once some companies did it,
| everyone had to follow suit or be left behind due to higher
| costs than their competitors
| t-writescode wrote:
| > Instead, those programmers will be starting their own
| companies and competing with you
|
| If so, then why am I not seeing a lot of new companies
| starting while we're in this huge down-turn in the
| development world?
|
| Or, is everyone like me and trying to start a business with
| only their savings, so not enough to hire people?
| ryandrake wrote:
| What's the far future end-state that these shareholders and
| execs envision? Companies with no staff? Just self-
| maintaining robots in the factory and AI doing the office
| jobs and paperwork? And a single CEO sitting in a chair
| prompting them all? Is that what shareholders see as the
| future of business? Who has money to buy the company's
| products? Other CEOs?
| reverius42 wrote:
| Just a paperclip maximizer, with all humans reduced to
| shareholders in the paperclip maximizer, and also possibly
| future paperclips.
| ryandrake wrote:
| > all humans reduced to shareholders
|
| That seems pretty optimistic. The shareholder / capital
| ownership class isn't exactly known for their desire to
| spread that ownership across the public broadly. Quite
| the opposite: Fewer and fewer are owning more and more.
| The more likely case is we end up like Elysium, with a
| tiny <0.1% ownership class who own everything and
| participate in normal life/commerce, selling to each
| other, walled off from the remaining 99.9xxx% barely
| subsisting on nothing.
| prewett wrote:
| > The shareholder / capital ownership class isn't exactly
| known for their desire to spread that ownership across
| the public broadly.
|
| This seems like a cynical take, given that there are two
| stock markets (just in the US), it's easy to set up a
| brokerage account, and you don't even need to pay trading
| fees any more. It's never been easier to become a
| shareholder. Not to mention that anyone with a 401(k)
| almost surely owns stocks.
|
| In fact, this is a demonstrably false claim. Over half of
| Americans have owned stock in every year since 1998,
| frequently close to 60%. [1]
|
| [1] https://news.gallup.com/poll/266807/percentage-
| americans-own...
| chasd00 wrote:
| > execs really love the idea and are trying every angle to
| achieve it.
|
| reminds me of the offshoring hype in the early 2000's. Where
| it worked, it worked well but it wasn't the final solution
| for all of software development that many CEOs wanted it to
| be.
| Nasrudith wrote:
| Yep. It has the same rhyme as the classic 'wishes made by
| fools' worst case: they don't realize that they themselves
| don't truly know what to ask for, so getting exactly what
| they asked for ruins them.
| zwnow wrote:
| Another issue is that the article assumes companies will let go
| of all programmers. They will make sure to keep some in case
| the fire spreads. Simple as that.
| insane_dreamer wrote:
| It'll happen gradually over time, with more pressure on
| programmers to "get more done".
|
| I think it's useful to look at what has already happened at
| another, much smaller profession -- translators -- as a
| precursor to what will happen with programmers.
|
| 1. translation software does a mediocre job, barely useful as a
| tool; all jobs are safe
|
| 2. translation software does a decent job, now expected to be
| used as time-saving aid, expectations for translators increase,
| fewer translators needed/employed
|
| 3. translation software does a good job, translators now hired
| to proofread/check the software output rather than translate
| themselves, allowing them to work 3x to 4x as fast as before,
| requiring proportionally fewer translators
|
| 4. translation software, now driven by LLMs, does an excellent
| job, only cursory checks required; very few translators
| required mostly in specialized cases
| Workaccount2 wrote:
| I actually know a professional translator and while a year
| ago he was full of worry, he now is much more relaxed about
| it.
|
| It turns out that like art, many people just want a human
| doing the translation. There is a strong romantic element to
| it, and it seems humans just have a strong natural
| inclination to only want other humans facilitating
| communication.
| arrowsmith wrote:
| How do they know that a human is doing the translation?
| What's to stop someone from just c&ping the text into an
| LLM, giving it a quick proofread, then sending it back to
| the client and saying "I translated this"?
|
| Sounds like easy money, maybe I should get into the
| translation business.
| Workaccount2 wrote:
| I mean, they don't, but I can assure you there are far
| more profitable ways to be deceptive than being a faux
| translator haha
| skydhash wrote:
| The fact that the client is actually going to use the
| text, and they will not find it funny when they're being
| laughed at -- or worse, sued because of some confusion it
| caused. I read Asian novels, and you can quickly (within a
| chapter) discern whether the translators have done a good
| job (and there are so many translation notes when the
| author relies on cultural elements).
| insane_dreamer wrote:
| 1) almost all clients hire a translation agency, who then
| farms the work out to freelance translators; payment is
| on a per-source-word basis.
|
| 2) the agency encourages translation tools, so long as
| the final content is okay (proofread by the translator),
| because they can then pay less (based on the assumption
| that it should take you less time). I've seen rates drop
| by half because of it.
|
| 3) the client doesn't know who did the translation and
| doesn't care - with the exception of literary pieces
| where the translator might be credited on the book.
| (Those cases typically won't go through an agency)
| insane_dreamer wrote:
| I've done freelance translating (not my day job) for 20
| years. What you describe is true for certain types of
| specialized translations, particularly anything that is
| literary in nature. But that is a very small segment. The
| vast majority of translation work is commercial in nature
| and for that companies don't care whether a human or
| machine did it.
| aksosnckckd wrote:
| The hard part of development isn't converting an idea in
| human speak to idea in machine speak. It's formulating that
| idea in the first place. This spans all the way from high
| level "tinder for dogs" concepts to low level technical
| concepts.
|
| Once AI is doing that, most jobs are at risk. It'll create
| robots to do manual labor better than humans as well.
| insane_dreamer wrote:
| Right. But it only takes 1 person, or maybe a handful, to
| formulate an idea that might take 100 people to implement.
| You will still need that one person but not the 100.
| daveguy wrote:
| Yes, but in all 4 of these steps you are literally describing
| the job transformer LLMs were designed to do. We are at 1
| (mediocre job) for LLMs in coding right now. Maybe 2 in a few
| limited cases (e.g. boilerplate). There's no reason to assume
| LLMs will ever perform at 3 for coding, for the same reason
| natural-language programming languages like COBOL are no
| longer used: natural language is not precise.
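|
| To make the imprecision concrete, here's a toy sketch (my
| own hypothetical example): the instruction "remove
| duplicates from the list" has at least two defensible
| readings, and only code pins one down.
|
|     # Reading 1: keep the first occurrence of each item,
|     # preserving the original order.
|     def dedupe_keep_order(items):
|         seen = set()
|         return [x for x in items
|                 if not (x in seen or seen.add(x))]
|
|     # Reading 2: drop every value that appears more than
|     # once, keeping only the values that were unique.
|     def keep_only_unique(items):
|         return [x for x in items if items.count(x) == 1]
|
|     data = [3, 1, 3, 2, 1]
|     print(dedupe_keep_order(data))  # [3, 1, 2]
|     print(keep_only_unique(data))   # [2]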
| insane_dreamer wrote:
| It seems the consensus is that we will reach level 3 pretty
| quickly given the pace of development in the past 2 years.
| Not sure about 4 but I'd say in 10 years we'll be there.
| weatherlite wrote:
| > It'll happen gradually over time
|
| How much time? I totally agree with you but being early is
| the same as being wrong as someone clever once said. There's
| a huge difference between it happening in less than 5 years
| like Zuckerberg and Sam Altman are saying and it taking 20
| more years. If the second scenario is what happens, me and
| many people on this thread can probably retire rather
| comfortably, and humanity possibly has enough time to come up
| with a working system to handle this mass change. If the
| first scenario happens it's gonna be very very painful for
| many people.
| anarticle wrote:
| Feels like C-suite thinks if they keep saying it, it will
| happen. Maybe! I think more likely programmers are experiencing
| a power spike.
|
| I think it's a great time to be small, if you can reap the
| benefits of these tools to out-deliver large enterprise
| EVEN FASTER than you already do. Aider and a couple of Mac
| minis and you can have a good time!
| giantg2 wrote:
| I'm feeling it at my non-tech company. They want more people to
| use Copilot and stuff and are giving out more bad ratings and
| PIPs to push devs out.
| Workaccount2 wrote:
| I can say my company stopped contracting for test system
| design, and we use a mix of models now to achieve the same
| results. Some of these have been running without issue for over
| a year now.
| aksosnckckd wrote:
| As in writing test cases? I've seen devs write (heavily
| mocked) unit tests using only AI, but these are worse than no
| tests for a variety of reasons. Our company also used to
| contract for these tests... but only because they wanted to
| make the test coverage metric go up. They didn't add any
| value but the contractor was offshore and cheap.
|
| If you're able to have AI generate integration-level tests
| (i.e. call an API, then ensure the database or external
| system is updated correctly -- "correctly" is doing a lot
| of heavy lifting here) that would be amazing! You're
| sitting on a goldmine, and I'd happily pay for that kind
| of test.
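|
| For clarity, that's roughly the shape of test I mean -- a
| minimal pytest-style sketch, where the endpoint, table, and
| paths are hypothetical placeholders, not anyone's real API:
|
|     import sqlite3
|     import requests
|
|     BASE_URL = "http://localhost:8000"  # hypothetical service
|     DB_PATH = "app.db"                  # hypothetical database
|
|     def test_create_order_persists_to_db():
|         # Drive the system through its public API, the way
|         # a real client would.
|         resp = requests.post(BASE_URL + "/orders",
|                              json={"sku": "ABC-123", "qty": 2})
|         assert resp.status_code == 201
|         order_id = resp.json()["id"]
|
|         # Then assert on the side effect in the database,
|         # instead of mocking the storage layer away.
|         con = sqlite3.connect(DB_PATH)
|         row = con.execute(
|             "SELECT sku, qty FROM orders WHERE id = ?",
|             (order_id,)).fetchone()
|         con.close()
|         assert row == ("ABC-123", 2)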
| Workaccount2 wrote:
| Amazingly, there is industry outside tech that uses
| software. We are an old school tangible goods manufacturing
| company. We use stacks of old grumbling equipment to do
| product verification tests, and LLMs to write the software
| that synchronizes them and interprets what they spit back
| out.
| iainctduncan wrote:
| I can tell you from personal experience that investors are
| feeling pressure to magically reduce head count with AI to keep
| up with the joneses. It's pretty horrifying how little
| understanding or information some of the folks making these
| decisions have. (I work in tech diligence on software M&A and
| talk to investment committees as part of the job)
| throwaway290 wrote:
| There were massive layoffs in 2024 and continuing this year. No
| one will scream that they are firing people for LLMs.
| qoez wrote:
| Some companies will try to fire as many programmers as possible
| and will end up struggling bc they have no moats against other
| companies with access to the same AI, or will hit some kind of AI
| saturation usefulness threshold. Other companies will figure out
| a smart hybrid to utilize existing talent and those are probably
| the ones that will thrive among the competition.
| lordofgibbons wrote:
| But why do you need programmers to utilize the AI? The whole
| point of AI is that it doesn't need an "operator".
|
| I'm not talking about GH Copilot or some autocomplete IDE
| feature, I'm talking about fully autonomous agents 2-3 years in
| the future. Just look at the insane rate of progress in the
| past 2 years. The next two years will be even faster if the
| past few months are anything to go by.
| phito wrote:
| > Just look at the insane rate of progress in the past 2
| years.
|
| Are we living on the same planet? I haven't seen much real
| progress since the release of ChatGPT. Sure, benchmarks and
| graphs are going up, but in practice, meh...
| ctoth wrote:
| I know you're not supposed to feed the trolls but honestly
| I was so taken aback by this comment, no progress since
| ChatGPT? I just had to respond. Are you serious? From
| reasoning models to multimodal. From code assistants which
| are actually useful to AIs which will literally turn my
| silly scribbling into full songs which actually sound good!
|
| I am completely blind and I used Gemini Live mode to help
| me change BIOS settings and reinstall Windows when the
| installer didn't pick up my USB soundcard. I spoke, with my
| own voice and completed a task with a computer which could
| only see my webcam stream. This, to me, is a heck of a lot
| more than ChatGPT was ever able to do in November 2022.
|
| If you continue to insist that stuff isn't improving, well,
| you can in fact do that... But I don't know how much I can
| trust you in terms of overall situational awareness if you
| really don't think any improvements have been made at all
| in the previous two years of massive investment.
| johnnyjeans wrote:
| > But why do you need programmers to utilize the AI?
|
| For the same reason you need Engineers to operate CAD
| software.
| ArnoVW wrote:
| For the same reason that we still have a finance and legal
| department even if you can outsource them, and for the same
| reason that non-technical CTOs don't exist.
|
| You can outsource the execution of a task but only if you
| know how to formulate your requirements and analyze the
| situation.
| SJC_Hacker wrote:
| Because programmers know the right questions to ask
|
| AIs have been shown to have confirmation bias depending on
| what you ask them. They won't question your assumptions on
| non-obvious subjects. Like "why should this application be
| written in Python and not C++". You could ask it the
| opposite, and it will provide ample justification for either
| position.
| Havoc wrote:
| That's what people said about outsourcing too. The corporate meat
| grinder keeps rolling forward anyway.
|
| Every single department and person believes the world will stop
| turning without them but that's rarely how that plays out.
| Espressosaurus wrote:
| I believe AI is the excuse, but that this is just to cover
| another wave of outsourcing.
| rossdavidh wrote:
| You have a point, all people do like to think they're more
| irreplaceable than they are, but the last round of offshoring
| of programmers did in fact end up with the companies trying to
| reverse course a few years later. GM was the most well-known
| example of this, but I worked at several organizations that
| found that getting software done on the other side of the
| planet was a bad idea, and ended up having to reverse course.
|
| The core issue is that the bottleneck step in software
| development isn't actually the ability to program a specific
| thing, it's the process of discovering what it is we actually
| want the program to do. Having your programmers AT THE OFFICE
| and in close communication with the people who need the
| software, is the best way to get that done. Having them on the
| other side of the planet turned out to be the worse way.
|
| This is unintuitive (to programmers as well as the
| organizations that might employ them), and therefore they have
| to discover it the hard way. I don't think this is something
| LLMs will be good at, now or ever. There may come a day when
| neural networks (or some other ML) will be able to do that, but
| that day is not near.
| marcosdumay wrote:
| > That's what people said about outsourcing too.
|
| And they were right... and a lot of companies fully failed
| because of it.
|
| And the corporate meat grinder kept rolling forward anyway. And
| the decision makers were all shielded from the consequences of
| their incompetence anyway.
|
| When the market is completely corrupted, nothing means
| anything.
| jdmoreira wrote:
| If there will be actual AGI (or super intelligence), none of
| these arguments hold. The machines will just be better than any
| programmer money can buy.
|
| Of course at that point every knowledge worker is probably
| unemployable anyway.
| varsketiz wrote:
| What if AGI is extremely expensive to run?
| kaneryu wrote:
| then we ask AGI how to make itself less expensive to run
| bdangubic wrote:
| I'd ask it to run for free
| varsketiz wrote:
| The answer to that question is 42.
|
| Why do you assume AGI is smarter than some human?
| jdmoreira wrote:
| Why would humans be peak intelligence? There is even
| variation in intelligence within the species. We are
| probably just stuck at some local maximum that satisfies
| all the constraints of our environment. Current AI is
| already much smarter than many humans at many things.
| varsketiz wrote:
| I'm not saying humans are peak intelligence.
| SJC_Hacker wrote:
| True AGI would make every worker unemployable.
|
| The only reason why true androids aren't possible yet is
| software. The mechanics are pretty much a solved problem.
| nexus_six wrote:
| This is like saying:
|
| "If we just had a stable way to create net energy from a fusion
| reactor, we'd solve all energy problems".
|
| Do we have a way to do that? No.
| jdmoreira wrote:
| Yes, we do. It's called reinforcement learning and compute
| Capricorn2481 wrote:
| You got some compute hidden in your couch? There's plenty
| of reason to think there's not enough compute to achieve
| this, and there's little reason to think compute improves
| intelligence linearly.
| jdmoreira wrote:
| Don't you follow the news? Amazon is bidding on nuclear
| power plants. We will just build more energy sources; we
| have plenty of headroom left. Also, optimizations are
| being built in both hardware and software. There is no
| foreseen barrier. Maybe training data, but the models have
| now pivoted to test-time/inference compute and
| reinforcement learning, and that seems to have no barrier
| except more compute and energy. That's what Stargate is,
| what UAE princes building datacenters in France is, etc.
| It's all in the news. So far, it seems like a straight
| line to AGI.
|
| Maybe a barrier will appear but doesn't seem like it atm
| dham wrote:
| There's such a huge disconnect between people reading headlines
| and developers who are actually trying to use AI day to day in
| good faith. We know what it is good at and what it's not.
|
| It's incredibly far away from doing any significant change in a
| mature codebase. In fact I've become so bearish on the technology
| trying to use it for this, I'm thinking there's going to have to
| be some other breakthrough or something other than LLM's. It just
| doesn't feel right around the corner. Now, completing small
| chunks of mundane code, explaining code, doing very small
| mundane changes? Very good at that.
| hammock wrote:
| > It's incredibly far away from doing any significant change in
| a mature codebase
|
| The COBOL crisis at Y2K comes to mind.
| Cascais wrote:
| Is this the same Cobol crisis we have now?
|
| https://www.computerweekly.com/news/366588232/Cobol-
| knowledg...
| lawlessone wrote:
| Same, LLMs are interesting but on their own are a dead end. I
| think something needs to actually experience the world in 3D
| in real time to understand what it is actually coding or
| doing tasks for.
| falcor84 wrote:
| > LLMs are interesting but on their own are a dead end.
|
| I don't think that anyone is advocating for LLMs to be used
| "on their own". Isn't it like saying that airplanes are
| useless "on their own" in 1910, before people had a chance to
| figure out proper runways and ATC towers?
| dingnuts wrote:
| there was that post about "vibe coding" here the other day
| if you want to see what the OP is talking about
| falcor84 wrote:
| You mean Karpathy's post discussed on
| https://twitter.com/karpathy/status/1886192184808149383 ?
|
| If so, I quite enjoyed that as a way of considering how
| LLM-driven exploratory coding has now become feasible.
| It's not quite there yet, but we're getting closer to a
| non-technical user being able to create a POC on their
| own, which would then be a much better point for them in
| engaging an engineer. And it will only get better from
| here.
| sarchertech wrote:
| Technology to allow business people to create POCs has
| been around for a long time.
| falcor84 wrote:
| All previous examples have been of the "no code" variety,
| where you press buttons and it controls presets that the
| creators of the authoring tool have prepared for you.
| This is the first time where you can talk to it and it
| writes arbitrary code for you. You can argue that it's
| not a good idea, but it is a novel development.
| sarchertech wrote:
| A no code solution at its most basic level is nothing
| more or less than a compiler.
|
| You wouldn't argue that writing in a high level language
| doesn't let you produce arbitrary code because the
| compiler is just spitting out presets its author prepared
| for you.
|
| There are 2 main differences between using an LLM to
| build an app for you and using a no code solution with a
| visual language.
|
| 1. The source code is English (which is definitely more
| expressive).
|
| 2. The output isn't deterministic (even with temperature
| set to 0, which is probably not what you want anyway).
|
| Both 1 and 2 are terrible ideas. I'm not sure which is
| worse.
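|
| To make point 2 concrete, here's a minimal sketch (assuming
| the OpenAI Python client; the model name is just an
| example):
|
|     from openai import OpenAI
|
|     client = OpenAI()  # reads OPENAI_API_KEY from the env
|
|     def generate(prompt: str) -> str:
|         resp = client.chat.completions.create(
|             model="gpt-4o-mini",
|             # temperature=0 makes decoding greedy-ish, but
|             # the API still does not guarantee identical
|             # output across calls.
|             temperature=0,
|             messages=[{"role": "user", "content": prompt}],
|         )
|         return resp.choices[0].message.content
|
|     # Two "compiles" of the same English "source":
|     a = generate("Write a function that validates emails")
|     b = generate("Write a function that validates emails")
|     print(a == b)  # not guaranteed True, unlike a compiler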
| falcor84 wrote:
| I just outright disagree. What this Vibe Coding is a
| substitute for is finding a random dev on Fiverr,
| which inherently suffers from your "1 and 2". And I'd
| argue that vibe coding already offers you more bang for
| your buck than the median dev on Fiverr.
| sarchertech wrote:
| Low code/no code solutions were already a substitute for
| finding a random dev on Fiverr, which was almost always a
| terrible way to solve almost any problem.
|
| The median dev on Fiverr is so awful that almost anything
| is more bang for your buck.
| cratermoon wrote:
| > actually experience the world in 3d in real time
|
| AKA embodiment. Hubert L. Dreyfus discussed this extensively
| in "Why Heideggerian AI Failed and How Fixing it Would
| Require Making it More Heideggerian":
| http://dx.doi.org/10.1080/09515080701239510
| data-ottawa wrote:
| I don't know that it needs to experience the world in real-
| time, but when the brain thinks about things it's updating
| its own weights. I don't think attention is a sufficient
| replacement for that mechanism.
|
| Reasoning LLMs feel like an attempt to stuff the context
| window with additional thoughts, which does influence the
| output, but it is still a proxy for plasticity and the
| aha-moments that plasticity can generate.
| lawlessone wrote:
| >I think this is true only if there is a novel solution
| that is in a drastically different direction than similar
| efforts that came before.
|
| That's a good point, we don't do that right now. It's all
| very crystallized.
| pjmlp wrote:
| You missed the companies selling AI consulting projects, with
| the disconnect between sales team, customer, folks on the
| customer side, consultants doing the delivery, and what
| actually gets done.
| lolinder wrote:
| Part of the problem is that many working developers are still
| in companies that don't allow experimentation with the bleeding
| edge of AI on their code base, so their experiences come from
| headlines and from playing around on personal projects.
|
| And on the first 10,000 lines of code, the best in class tools
| are actually pretty good. Since they can help define the
| structure of the code, it ends up shaped in a way that works
| well for the models, and it still basically all fits in the
| useful context window.
|
| What developers who can't use it on large warty codebases don't
| see is how poorly even the best tools do on the kinds of
| projects that software engineers typically work on for pay. So
| they're faced with headlines that oversell AI capabilities and
| positive experiences with their own small projects and they buy
| the hype.
| throwaway0123_5 wrote:
| Some codebases grown with AI assistance must be getting
| pretty large now; I think an interesting metric to track
| would be the percent of code that is AI-generated over time.
| Still isn't a perfect proxy for how much work the AI is
| replacing though, because of course it isn't the case that
| all lines of code would take the same amount of time to write
| by hand.
| lolinder wrote:
| Yeah, that would be very helpful to track. Anecdotally, I
| have found in my own projects that the larger they get the
| less I can lean on agent/chat models to generate new code
| that works (without needing enough tweaks that I may as
| well have just written it myself). Having been written with
| models does seem to help, but it doesn't get over the
| problem that eventually you run out of useful context
| window.
|
| What I have seen is that autocomplete scales fine (and
| Cursor's autocomplete is amazing), but autocomplete
| supplements a software engineer, it doesn't replace them.
| So right now I can see a world where one engineer can do a
| lot more than before, but it's not clear that that will
| actually reduce engineering jobs in the long term as
| opposed to just creating a teller effect.
| ryandrake wrote:
| It might not just be helpful but required one day.
| Depending on how the legality around AI-generated code
| plays out, it's not out of the question that companies
| using it will have to keep track of and check the
| provenance and history of their code, like many companies
| already do for any open source code that may leak into
| their project. My company has an "open source review"
| process to help ensure that developers aren't copy-
| pasting GPL'ed code or including copyleft libraries into
| our non-GPL licensed products. Perhaps one day it will be
| common to do an "AI audit" to ensure all code written
| complied with whatever the future regulatory landscape
| shapes up to be.
| Jcampuzano2 wrote:
| My company allowed us to use it but most developers around me
| didn't reach out to the correct people to be able to use it.
|
| Yes I find it incredibly helpful and try to tell them.
|
| But it's only helpful in small contexts, auto completing
| things, small snippets, generating small functions.
|
| Any large-scale change of the kind these AI companies claim
| their tools can handle just falls straight on its face. I've
| tried many times, and with every new model. It can't do it
| well enough to trust in any codebase bigger than a few tens
| of thousands of lines of code.
| menaerus wrote:
| Did you have to do any preparation steps before you asked
| a model to do the large-scale change, or were there no
| steps involved? For example, did you simply ask for the
| change, or did you give the model a chance to learn about
| the codebase? I am genuinely asking; I'm curious because I
| haven't had a chance to use those models at work.
| lolinder wrote:
| Not OP, but I've had the same experience, and that's with
| tools that purport to handle the context for you.
|
| And frankly, if you can't automate context, then you
| don't have an AI tool that can realistically replace a
| programmer. If I have to manually select which of my
| 10000 files are relevant to a given query, then I still
| need to be in the loop and will also likely end up doing
| almost as much work as I would have to just write the
| code.
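|
| To be concrete about what "automating context" means, here
| is a deliberately naive sketch of the selection step -- toy
| keyword scoring, not any real product's method:
|
|     import os
|
|     def score(query: str, text: str) -> int:
|         # Toy relevance: count query terms found in the file.
|         terms = set(query.lower().split())
|         return len(terms & set(text.lower().split()))
|
|     def top_files(query: str, root: str, k: int = 10):
|         scored = []
|         for dirpath, _, names in os.walk(root):
|             for name in names:
|                 if name.endswith(".py"):
|                     path = os.path.join(dirpath, name)
|                     with open(path, errors="ignore") as f:
|                         text = f.read()
|                     scored.append((score(query, text), path))
|         ranked = sorted(scored, reverse=True)[:k]
|         return [path for s, path in ranked if s > 0]
|
|     # Real tools swap score() for embeddings or AST analysis,
|     # but the hard part is the same: picking the right 10
|     # files out of 10,000 without a human in the loop.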
| menaerus wrote:
| I see that you deleted your previous response, which was
| unnecessarily snarky, while my question was genuine and
| simple, I suppose.
|
| > And frankly, if you can't automate context,
|
| How about ingesting the whole codebase into the model? I
| have seen that this is possible with at least one such
| tool (Devin), which I believe uses a GPT model
| underneath, meaning that other providers could automate
| this step too. I am curious whether that would help in
| generating more legitimate large-scale changes.
| lolinder wrote:
| > I see that you deleted your previous response which was
| unnecessarily snarky while my question was genuine and
| simple I suppose.
|
| You edited your comment to clarify that you were asking
| from a place of ignorance as to the tools. Your original
| comment read as snarky and I responded accordingly,
| deleting it when I realized that you had changed yours.
| :)
|
| > How about ingesting the whole codebase into the model?
| I have seen that this is possible with at least one such
| tool (Devon) and which I believe is using gpt model
| underneath meaning that other providers could automate
| this step too. I am curious if that would help in
| generating more legit large scale changes.
|
| It doesn't work. Even the models that claim to have
| really large context windows get very distracted if you
| don't selectively pick relevant context. That's why I
| always talk about useful context window instead of just
| plain context window--the useful context window is much
| lower and how much you have depends on the type of text
| you're feeding it.
| menaerus wrote:
| I don't think my comment read as snarky but I was
| surprised to see the immediate downvote which presumably
| came from you so I only added the last sentence. This is
| a stupid way of disagreeing and attempting to shut down
| the discussion without merit.
|
| > It doesn't work. Even the models that claim to have
| really large context windows get very distracted if you
| don't selectively pick relevant context.
|
| I thought Devin is able to pre-process the whole codebase,
| which can take up to a single day for larger codebases, so
| it must be doing something, e.g. indexing the code? If so,
| this isn't a context-window specific thing; it's something
| else, and it makes me wonder how that works.
| lolinder wrote:
| > I don't think my comment read as snarky but I was
| surprised to see the immediate downvote which presumably
| came from you so I only added the last sentence.
|
| I can't downvote you because you are downthread of me. HN
| shadow-disables downvotes on all child and grandchild
| comments.
|
| I'm the one who upvoted you to counteract the downvote.
| :)
| whamlastxmas wrote:
| And then they hugged and became lifelong friends :)
| menaerus wrote:
| You never know - one moment arguing on HN and the second
| moment you know, drinking at the bar lamenting on how AI
| is gonna replace us :)
| menaerus wrote:
| Ok, sorry about that.
| vunderba wrote:
| > How about ingesting the whole codebase into the model?
|
| You keep referring to this vague idea of "ingesting the
| whole codebase". What does this even mean? Are you
| talking about building a codebase-specific RAG, fine-
| tuning a model, injecting the entire codebase into the
| system context, etc.?
| menaerus wrote:
| It is vague because the implementation details you are
| asking me for are closed source, for obvious reasons. I
| can only guess what it does, but that's beside the point.
| The point is rather that Devin, or a 1M-context qwen
| model, might be better or more resilient to the "lack
| of context" than what the others were suggesting.
| raducu wrote:
| > For example, did you simply ask for the change or did
| you give a model a chance to learn about the codebase.
|
| I've tried it both with personal projects and work.
|
| My personal project/benchmark is a 3d snake game. O3 is
| by far the best, but even with a couple of hundred lines
| of code it wrote itself it loses coherence and can't
| produce changes that involve changing 2 lines of code in
| a span of 50 lines of code. It either cannot comprehend
| it needs to touch multiple places, or it re-writes huge
| chunks of code and breaks other functionality.
|
| At work, it's fine for writing unit tests on straightforward
| tasks that it most likely has seen examples of
| before. On domain-specific tasks it's not so good and
| those tasks usually involve multiple file edits in
| multiple modules.
|
| The denser the logic, the smaller the context where LLMs
| seem to be coherent. And that's funny, because LLMs seem
| to deal much better with changing code humans wrote than
| the code the LLMs wrote themselves.
|
| Which makes me wonder -- if we're all replaced by AI, who
| will write the frameworks and programming languages
| themselves?
| menaerus wrote:
| Thanks but IIUC you're describing a situation where
| you're simply using a model without giving it a chance to
| learn from the whole codebase? If so, then I was asking
| for the opposite where you would ingest the whole
| codebase and then let the model spit out the code. This
| in theory should enable the AI model to build a model of
| your code.
|
| > if we're all replaced by AI, who will write the
| frameworks and programming languages themselves?
|
| What for? There are enough programming languages and
| enough frameworks. How about using an AI
| model to maintain and develop existing complex codebases?
| IMHO if AI models become more sophisticated and are able
| to solve this, then the answer is pretty clear who will
| be doing it.
| Jcampuzano2 wrote:
| There are simply no models that can keep in context the
| amount of info required in enterprise codebases before
| starting to forget or hallucinate.
|
| I've tried to give it relevant context myself (a tedious
| task in itself to be honest) and even tools that claim to
| automatically be able to do so fail wonderfully at bigger
| than toy project size in my experience.
|
| The codebase I'm working on day to day at this moment is
| give or take around 800,000 lines of code and this isn't
| even close to our largest codebase since its just one
| client app for our monolith.
|
| Even trivial changes require touching many files. It
| would honestly take any average programmer less time to
| implement something themselves than trying to convince an
| LLM to complete it.
| menaerus wrote:
| The largest context that I am aware of an open-source
| model (e.g. qwen) managing is 1M tokens. At a conservative
| ~30 tokens per line of code, this translates to roughly
| 30kLoC. I'd envision that this could in theory work even
| on large codebases. It certainly depends on the change to
| be done, but I can imagine that ~30kLoC of context is
| large enough for most module-specific changes. Possibly
| the models that you're using have a much smaller context
| window?
|
| Then again, and I am repeating myself from other comments
| I made here in the topic, there's also Devin which pre-
| processes the codebase before you can do anything else.
| That kinda makes me wonder if current limitations that
| people observe in using those tools are really
| representative of what might be the current state of the
| art.
| Jcampuzano2 wrote:
| If you don't mind me asking, what size of codebases do
| you typically work on? As mentioned I've tried using all
| the available commercial models and none work better than
| as a helpful autocomplete, test, and utility function
| generator. I'm sure maybe big players like Meta, OpenAI,
| MS, etc do have the capability of expanding its context
| for their own internal projects and training specifically
| on their code, but most of the rest of us can't feasibly
| do that since we don't own our own AI moat.
|
| Even on my personal projects and smaller internal
| projects that are small toy projects or utility tools I
| sometimes struggle to get them to build anything
| significant. I'm not saying it's impossible, but I always
| find it best at starting things from scratch, and small
| tools. Maybe it's just a sign that AI would be best for
| microservices.
|
| I've never used Devin so I can't speak to it, but I do
| recall seeing it was also overhyped at best and struggled
| to do anything it was purported to be able to in demos.
| Not saying that this is still true.
|
| I would be interested in seeing how Devin performs on a
| large open source project in real-time (since if I recall
| their demos were not real-time demonstrations) for
| instance just to evaluate its capabilities.
| menaerus wrote:
| Several million lines of code. Can't remember any
| project that I was involved with that was less than
| 5MLoC. C++ system-level programming.
|
| Overhyped or not, Devin is using something else under the
| hood since it pre-processes your whole codebase. It's
| not "realtime" since it simulates CoT, meaning that it
| "works" on the patch the very same way a developer would,
| and therefore it will give you a resulting PR in a few
| hours AFAIR. I agree that a workable example on a more
| complex codebase would be more interesting.
|
| > I've tried using all the available commercial models
| and none work better than as a helpful autocomplete,
| test, and utility function generator
|
| That's why I mentioned qwen: I think commercial AI
| models do not have such a large context window size.
| Perhaps, therefore, the experience would have been
| different.
| Jcampuzano2 wrote:
| And you have had luck with models like the one you
| mentioned and Devin generating significant amounts of
| code in these codebases? I would love to be able to have
| this due to the productivity gains it should allow but
| I've just never been able to demonstrate what the big AI
| coding services claim to be able to do at a large scale.
|
| What they already do is a decent productivity boost but
| not nearly as much as they claim to be capable of.
| menaerus wrote:
| As I already said in my first comment, I haven't used
| those models and any of them would have been forbidden at
| my work.
|
| My point was rather that you might be observing
| suboptimal results only because you haven't used the
| models which are more fit, at least hypothetically, for
| your use case.
| vunderba wrote:
| I've heard pretty mixed opinions about the touted
| capabilities of Devin.
|
| https://www.itpro.com/software/development/the-worlds-
| first-...
| ragle wrote:
| In a similar situation at my workplace.
|
| What models are you using that you feel comfortable
| trusting to understand and operate on 10-20k LOC?
|
| Using the latest and greatest from OpenAI, I've seen output
| become unreliable with as little as ~300 LOC on a pretty
| simple personal project. It will drop features as new ones
| are added, make obvious mistakes, refuse to follow
| instructions no matter how many different ways I try to
| tell it to fix a bug, etc.
|
| Tried taking those 300 LOC (generated by o3-mini-high) to
| cursor and didn't fare much better with the variety of
| models it offers.
|
| I haven't tried OpenAI's APIs yet - I think I read that
| they accommodate quite a bit more context than the web
| interface.
|
| I do find OpenAI's web-based offerings extremely useful for
| generating short 50-200 LOC support scripts, generating
| boilerplate, creating short single-purpose functions, etc.
|
| Anything beyond this just hasn't worked all that well for
| me. Maybe I just need better or different tools though?
| Jcampuzano2 wrote:
| I usually use Claude 3.5 Sonnet since it's still the one
| I've had the best luck with for coding tasks.
|
| When it comes to 10k LOC codebases, I still don't really
| trust it with anything. My best luck has been small
| personal projects where I can sort of trust it to make
| larger-scale changes, though "larger scale" is still
| small in absolute terms.
|
| I've found it best for generating tests and for
| autocompletion: especially if you give it context via
| function names and parameter names, I find it can
| oftentimes complete a whole function I was about to
| write using the interfaces available to it in files I've
| visited recently.
|
| But besides that I don't really use it for much outside
| of starting from scratch on a new feature or helping me
| get a plan together before starting work on something I
| may be unfamiliar with.
|
| We have access to all models available through Copilot,
| including o3 and o1, and access to ChatGPT Enterprise,
| and I do find using it via the chat interface nice just
| for architecting and planning. But I usually do the
| actual coding with help from autocompletion since it
| honestly takes longer to try to wrangle it into doing the
| correct thing than doing it myself with a little bit of
| its help.
| ragle wrote:
| This makes sense. I've mostly been successful doing these
| sorts of things as well and really appreciate the way it
| saves me some typing (even in cases where I only keep
| 40-80% of what it writes, this is still a huge savings).
|
| It's when I try to give it a clear, logical specification
| for a full feature and expect it to write everything
| that's required to deliver that feature (or the entirety
| of a slightly-more-than-trivial personal project) that
| it falls over.
|
| I've experimented trying to get it to do this (for
| features or personal projects that require maybe 200-400
| LOC) mostly just to see what the limitations of the tool
| are.
|
| Interestingly, I hit a wall with GPT-4 on a ~300 LOC
| personal project that o3-mini-high was able to overcome.
| So, as you'd expect - the models are getting better.
| Pushing my use case only a little bit further with a few
| more enhancements, however, o3-mini-high similarly fell
| over in precisely the same ways as GPT-4, only a bit
| worse in the volume and severity of errors.
|
| The improvement between GPT-4 and o3-mini-high felt
| nominally incremental (which I guess is what they're
| claiming it offers).
|
| Just to say: having seen similar small bumps in
| capability over the last few years of model releases, I
| tend to agree with other posters that it feels like we'll
| need something revolutionary to deliver on a lot of the
| hype being sold at the moment. I don't think current LLM
| models / approaches are going to cut it.
| nyarlathotep_ wrote:
| I've found it very easy to "generate" yourself into a
| corner: a total mess with no clear control flow that
| ends up more convoluted than it need be, by a mile.
|
| If you're in mostly (or totally) unfamiliar territory, you
| can end up in a mess, fast.
|
| I was playing around with writing a dead-simple websocket
| server in go the other evening and it generated some
| monstrosity with multiple channels (some unused?) and a
| tangle of goroutines etc.
|
| Quite literally copying the example from Gorilla's source
| tree and making small changes would have gotten me 90% of
| the way there; instead I ended up with a mostly opaque
| pile of code that *looks good* from a distance, but is
| barely functional.
|
| (This wasn't a serious exercise, I just wanted to see how
| "far" I could get with Copilot and minimal intervention)
| Jcampuzano2 wrote:
| Yeah, I've found it's good for getting something basic
| started from scratch, but oftentimes if I try to
| iterate, it starts hallucinating very fast and
| forgetting what it was even doing after a short while.
|
| Newer models have gotten better at this, and it takes
| longer before they start producing gibberish, but all of
| them have their limit.
|
| And given the size of lots of enterprise codebases like
| the ones I'm working in, it is just too far away from
| being useful enough to replace many programmers, in my
| opinion. I'm convinced the CEOs who say AI is replacing
| programmers are just using it as an excuse to downsize
| while keeping investors happy.
| fesoliveira wrote:
| That is also my experience. I use ChatGPT to help me
| iterate on a Godot game project, and it does not take
| more than a handful of prompts for it to forget or
| hallucinate something we previously established. I need
| to constantly remind it about code it suggested a while
| ago or things I asked for in the past, or it completely
| ignores the context and focuses just on the latest ask.
|
| It is incredibly powerful for getting things started,
| but as soon as you have a sketch of a complex system
| going, it loses its grasp on the full picture and does
| not account for the states outside the small asks you
| make. This is even more evident when you need to correct
| it about something or request a change after a large
| prompt. It just throws all the other stuff out the
| window and hyperfocuses only on that one piece of code
| that needs changing.
|
| This has been the case since GPT-3, and even their most
| recent model (forgot the name, the reasoning one) has
| this issue.
| WillPostForFood wrote:
| _the kinds of projects that software engineers typically work
| on for pay_
|
| This assumes a typical project is fairly big and complex.
| Maybe I'm biased the other way, but I'd guess 90% of software
| engineers are writing boilerplate code today that could be
| greatly assisted by LLM tools. E.g., PHP is still one of the
| top languages, which means a lot of basic WordPress stuff
| that LLMs are great at.
| lolinder wrote:
| The question isn't whether the code is complex
| algorithmically, the question is whether the code is:
|
| * Too large to fit in the useful context window of the
| model,
|
| * Filled with a bunch of warts and landmines, and
|
| * Connected to external systems that are not self-
| documenting in the code.
|
| Most stuff that most of us are working on meets all three
| of these criteria. Even microservices don't help, if
| anything they make things worse by pulling the necessary
| context outside of the code altogether.
|
| And note that I'm not saying that the tools aren't useful,
| I'm saying that they're nowhere near good enough to be
| threatening to anyone's job.
| pgm8705 wrote:
| Yes. I think part of the problem is how good it is at starting
| from a blank slate and putting together an MVP type app. As a
| developer, I have been thoroughly impressed by this. Then non-
| devs see this and must think software engineers are doomed.
| What they don't see is how terrible LLMs are at working with
| complex, mature codebases and the hallucinations and endless
| feedback loops that go with that.
| idle_zealot wrote:
| The tech to quickly spin up MVP apps has been around for a
| while now. It gets you from a troubling blank slate to
| something with structure, something you can shape and build
| on.
|
| I am of course talking about:
|     npx create-{template name}
|
| Or your language of choice's equivalent (or git clone
| template-repo).
| hnthrow90348765 wrote:
| > Now completing small chunks of mundane code, explaining code,
| doing very small mundane changes. Very good at.
|
| This is the only current threat. The time you save as a
| developer using AI on mundane stuff will get filled by
| something else, possibly more mundane stuff.
|
| A small company with only 2-5 Seniors may not be able to drop
| anyone. A company with 100 seniors might be able to drop 5-10
| of them total, spread across each team.
|
| The first cuts will come at scaled companies. However, it's
| difficult to detect if companies are cutting people just to
| save money or if they are actually realizing any productivity
| gains from AI at this point.
| renegade-otter wrote:
| Especially since the zero-interest bonanza led to over-hiring
| of resume-driven developers. Half of AWS is torching energy
| by running some bloat that should not even be there.
| sumoboy wrote:
| I don't think companies realize AI is not free. For 100+
| devs, the OpenAI, Anthropic, and Gemini API costs add up,
| plus the hidden overhead costs nobody talks about.
|
| There is too much speculation that productivity will
| increase substantially, especially when a majority of
| companies' IT is just so broken and archaic.
| csmpltn wrote:
| I think that LLMs are only going to make people with real
| tech/programming skills much more in demand, as younger
| programmers skip straight into prompt engineering and never
| develop themselves technically beyond the bare minimum needed
| to glue things together.
|
| The gap between people with deep, hands-on experience
| who understand how a computer works and prompt engineers
| will become insanely wide.
|
| Somebody needs to write that operating system the LLM runs on.
| Or your bank's backend system that securely stores your money.
| Or the mission critical systems powering this airplane you're
| flying next week... to pretend like this will all be handled by
| LLMs is so insanely out of touch with reality.
| whynotminot wrote:
| Isn't this kind of thing the story of tech though?
|
| Languages like Python and Java come around, and old-school C
| engineers grouse that the kids these days don't really
| understand how things work, because they're not managing
| memory.
|
| Modern web-dev comes around and now the old Java hands are
| annoyed that these new kids are just slamming NPM packages
| together and polyfills everywhere and no one understands Real
| Software Design.
|
| I actually sort of agree with the old C hands to some extent.
| I think people don't understand how a lot of things actually
| work. And it also doesn't really seem to matter 95% of the
| time.
| chucky_z wrote:
| $1 for the pencil, $1000 for the line.
|
| That's the 5% when it does matter.
| whynotminot wrote:
| Yes this is what people like to think. It's not really
| true in practice.
| shafyy wrote:
| Just because these abstraction layers happened in the
| past does not mean that it will continue to happen that
| way. For example, many no-code tools promised just that,
| but they never caught on.
|
| I believe that there's an "optimal" level of abstraction,
| which, for the web, seems to be something like the modern
| web stack of HTML, JavaScript and some server-side language
| like Python, Ruby, Java, JavaScript.
|
| Now, there might be tools that make a developer's life
| easier, like a nice IDE, debugging tools, linters,
| autocomplete and also LLMs to a certain degree (which, for
| me, still is a fancy autocomplete), but they are _not_
| abstraction layers in that sense.
| neom wrote:
| I love that you brought no-code tools into this because I
| think it's interesting that it never worked correctly.
|
| My guess is: on one side, things like Squarespace and Wix
| get super super good for building sites that don't feel
| like Squarespace and Wix (I'm not sure I'd want to be a
| pure "website dev" right now - although I think
| Squarespace squashed a lot of that long ago) - and then
| very very nice tooling for "real engineers" (whatever
| that means).
|
| I'm pretty handy with tech. I mean, the last time I built
| anything real was the 90s, but I know how most things
| work pretty well. I sat down to ship an app last weekend;
| with no sleep and Monday rolling around, GCP was giving
| me errors, and I hadn't realized one of the files the
| LLMs wrote looked like code but was all placeholder.
|
| I think this is basically what the Anthropic report
| says: automation issues happen via displacement, and
| displacement is typically fine, except the displacement
| this time is happening very rapidly (I read in a
| different report that what would traditionally be ~80
| years of displacement is expected to happen in ~10 years
| with AI).
| zozbot234 wrote:
| Excel is a "no-code" system and people seem to like it.
| Of course, sometimes it tampers with your data in
| horrifying ways because something you entered (or
| imported into the system from elsewhere) just happened to
| look kinda like a date, even though it was intended to be
| something completely different. So there's that.
| MyOutfitIsVague wrote:
| Excel is hardly "no-code". Any heavy use of Excel I've
| seen uses formulas, which are straight-up code.
| sanderjd wrote:
| But any heavy use of "no-code" apps also ends up looking
| this way, with "straight-up code" behind many of the
| wysiwyg boxes.
| MyOutfitIsVague wrote:
| Right, but "no-code" implies something: programming
| without code. Excel is not that in any fashion. It's
| either programming with code or an ordinary spreadsheet
| application without code. You'd really have to stretch
| your definitions to consider it "no-code" in a way that
| wouldn't apply to pretty much any office application.
| marcosdumay wrote:
| > Excel is a "no-code" system and people seem to like it.
|
| If you've found any Excel guru who doesn't spend most of
| their time in VBA, you have had a really unusual
| experience.
| woah wrote:
| Huge numbers of accountants and lawyers use excel heavily
| knowing only the built in formula language. They will
| have a few "gurus" sprinkled around who can write macros
| but this is used sparingly because the macros are a black
| box and make it harder to audit the financial models.
| yellowstuff wrote:
| I've worked in finance for 20 years and this is the
| complete opposite of my experience. Excel is ubiquitous
| and drives all sorts of business processes in various
| departments. I've seen people I would consider Excel
| gurus, in that they are able to use Excel much more
| productively than normal users, but I've almost never
| seen anyone use VBA.
| helge9210 wrote:
| Excel is a programming system with pure functions,
| imperative code (VBA/Python recently), database (cell
| grid, sheets etc.) and visualization tools.
|
| So, not really "no-code".
| ozim wrote:
| That's technically correct, but it's also wrong.
|
| The no-code part of Excel is that most functions are
| implemented for the user, who doesn't have to know
| anything about software development to create what he
| needs and doesn't need a software developer to do it for
| him.
| rmah wrote:
| I would disagree. Every formula you enter into a cell is
| "code". Moreover, more complex worksheets require VBA.
| fuy wrote:
| And also these old C hands don't seem to get paid
| (significantly) more than a regular web-dev who doesn't
| care about hardware, memory, performance etc. Go figure.
| jackcosgrove wrote:
| Pay is determined by the supply and demand for labor,
| which encompass many factors beyond the difficulty of the
| work.
|
| Being a game developer is harder than being an enterprise
| web services developer. Who gets paid more, especially
| per hour?
| SJC_Hacker wrote:
| And that last 5% is what you're paying for
| whynotminot wrote:
| But not really. Looking around my shop, I'm probably the
| only one around who used to write a lot of C code. No one
| is coming to ask me about obscure memory bugs. I'm
| certainly not getting paid better than my peers.
|
| The knowledge I have is personally gratifying to me
| because I like having a deeper understanding of things.
| But I have to tell you I thought knowing more would give
| me a deeper advantage than it has in actual practice.
| kmoser wrote:
| Is that because the languages being used at your shop
| have largely abstracted away memory bug issues? If you
| were to get a job writing embedded systems, or compilers,
| or OSes, wouldn't your knowledge be more highly valued
| and sought after (assuming you were one of the more
| senior devs)?
| abnercoimbre wrote:
| If you have genuine systems programming knowledge,
| usually the answer is to innovate on a particular
| toolchain or ship your own product (I understand you may
| not like business stuff though.)
| rootnod3 wrote:
| I would argue that your advantage right now is that YOU
| are the one position they can't replace with LLMs,
| because your work requires exact, fine-grained knowledge
| of pointers and everything else, and needs that exact
| expertise. You might be on the same pay as your peers,
| but you also carry additional stability.
| gopher_space wrote:
| You're providing value every time you kill a bad idea
| "because things don't actually work that way" or shave a
| loop, you're just not tracking the value and neither is
| your boss.
|
| To your employer, hiring people who know things (i.e.
| you) has given them a deeper advantage in actual
| practice.
| deadlast2 wrote:
| Programming is an interface to the machine. AI, even
| what we know now (LLMs, agents, RAG), will absorb all
| that. It has many flaws but is still much better than
| most programmers.
|
| All future programmers will be using it.
|
| As for the programmers that don't want to use it: I
| think there will be literally billions of lines of
| unbelievably bad code generated by these generation
| 1-100 AIs and junior programmers, code that will need
| to be corrected and fixed.
| SteveNuts wrote:
| > Languages like Python and Java come around, and old-
| school C engineers grouse that the kids these days don't
| really understand how things work
|
| Everything has a place: you most likely wouldn't write an
| HPC database in Python, and you wouldn't write a simple
| CRUD recipe app in C.
|
| I think the same thing applies to using LLMs: you don't use
| the code it generates to control a power plant or fly an
| airplane. You use it for building the simple CRUD recipe
| app where the stakes are essentially zero.
| bdhcuidbebe wrote:
| Yea, every programmer should write at least a CPU
| emulator in their language of choice; it's such an
| undervalued exercise that will teach you so much about
| how stuff really works.
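|
| As a concrete sketch of the exercise, here is a toy
| accumulator machine in Python (the 4-instruction ISA is
| made up for illustration):
|
|     # Toy CPU: program is a list of (opcode, operand).
|     def run(program):
|         acc, pc = 0, 0
|         while pc < len(program):
|             op, arg = program[pc]
|             if op == "LOAD":      # acc = constant
|                 acc = arg
|             elif op == "ADD":     # acc += constant
|                 acc += arg
|             elif op == "JNZ":     # jump if acc != 0
|                 if acc != 0:
|                     pc = arg
|                     continue
|             elif op == "HALT":
|                 break
|             pc += 1
|         return acc
|
|     # Count down from 3; prints 0.
|     prog = [("LOAD", 3), ("ADD", -1),
|             ("JNZ", 1), ("HALT", 0)]
|     print(run(prog))
|
| Even at this size you meet a program counter, jumps, and
| halting, which is most of what the exercise is meant to
| teach.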
| lizknope wrote:
| You can go to the next step. I studied computer
| engineering not computer science in college. We designed
| our own CPU and then implemented it in an FPGA.
|
| You can go further and design it out of discrete logic
| gates. Then write it in Verilog. Compare the differences
| and which made you think more about optimizations.
| chasd00 wrote:
| "in order to bake a pie you must first create the
| universe", at some point, reaching to lower and lower
| levels stops being useful.
| lizknope wrote:
| Sure.
|
| Older people are always going to complain about younger
| people not learning something that they did. When I
| graduated in 1997 and started working I remember some
| topics that were not taught, but the older engineers were
| shocked I didn't know them from college.
|
| We keep creating new knowledge. It is impossible to fit
| everything into a 4 year curriculum without deemphasizing
| some other topic.
|
| I learned Motorola 68000 assembly language in college. I
| talked to a recent computer science graduate and he had
| never seen assembly before. I also showed him how I write
| static HTML in vi the same way I did in 1994 for my
| simple web site and he laughed. He showed me the back end
| to their web site and how it interacts with all their
| databases to generate all the HTML dynamically.
| SoftTalker wrote:
| When I was a kid I "wrote" (mostly copied from a
| programming magazine) a 4-bit CPU emulator on my
| TI-99/4a. Simple as it was, it was the light bulb coming
| on for me about how CPUs actually worked. I could then
| understand the assembly language books that had been
| impenetrable to me before. In college when I first
| started using "C", pointers made intuitive sense. It's a
| very valuable exercise.
| ragle wrote:
| I wonder about this too - and also wonder what the
| difference of order is between the historical shifts you
| mention and the one we're seeing now (or will see soon).
|
| Is it 10 times the "abstracting away complexity and
| understanding"? 100, 1000, [...]?
|
| This seems important.
|
| There must be some threshold beyond which (assuming most
| new developers are learning using these tools) fundamental
| ability to understand how the machine works and thus
| ability to "dive in and figure things out" when something
| goes wrong is pretty much completely lost.
| TOGoS wrote:
| > There must be some threshold beyond which (assuming
| most new developers are learning using these tools)
| fundamental ability to understand how the machine works
| and thus ability to "dive in and figure things out" when
| something goes wrong is pretty much completely lost.
|
| For me this happened when working on some Spring Boot
| codebase thrown together by people who obviously had no
| idea what they were doing (which maybe is the point of
| Spring Boot; it seems to encourage slopping a bunch of
| annotations together in the hope that it will do
| something useful). I used to be able to fix things when
| they went wrong, but this thing is just so mysterious and
| broken in such ridiculous ways that I can never seem to
| get to the bottom of it.
| sanderjd wrote:
| Notably, I don't think there was a mass disemployment of
| "old C hands". They just work on different things.
| commandlinefan wrote:
| My son is a CS major right now, and since I've been
| programming my whole adult life, I've been keeping an eye
| on his curriculum. They do still teach CS majors from the
| "ground up" - he took system architecture, assembly
| language and operating systems classes. While I kind of get
| the sense that most of them memorize enough to pass the
| tests and get their degree, I have to believe that they end
| up retaining some of it.
| SoftTalker wrote:
| Yes, they remember the concepts, mostly. Not the details.
| But that's often enough to help with reasoning about
| higher-level problems.
| whynotminot wrote:
| I think this is still true of a solid CS curriculum.
|
| But it's also true that your son will probably end up
| working with boot camp grads who didn't have that
| education. Your son will have a deeper understanding of
| the world he's operating in, but what I'm saying is that
| from what I've seen it largely hasn't mattered all that
| much. The bootcampers seem to do just fine for the most
| part.
| rcpt wrote:
| LLMs are a much bigger jump in productivity than moving to
| a high level language.
| o_nate wrote:
| At least for the type of coding I do, if someone gave me
| the choice between continuing to work in a modern high-
| level language (such as C#) without LLM assistance, or
| switching to a low-level language like C with LLM
| assistance, I know which one I would choose.
| throwaway0123_5 wrote:
| Likewise, under no circumstances would I trade C for LLM-
| aided assembly programming. That sounds hellish. Of
| course it could (probably will?) be the case that this
| may change at some point. Innovations in higher-level
| languages aren't returning productivity improvements at
| anywhere close to the same rate as LLMs are, and in any
| case LLMs probably benefit from improvements to higher-
| level languages as well.
| bee_rider wrote:
| The real hardcore experts should be writing libraries
| anyway, to fully take advantage of their expertise in a
| tiny niche and to amortize the cost of studying their
| subproblem across many projects. It has never been easier
| to get people to call your C library, right? As long as
| somebody can write the Python interface...
|
| Numpy has delivered so many FLOPs for BLAS libraries to
| work on.
|
| Does anyone really care if you call their optimized library
| from C or Python? It seems like a sophomoric concern.
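|
| A minimal sketch of how thin that interface can be,
| using Python's ctypes to call a C routine directly
| (here libm's sqrt, assuming a Unix-like system where
| find_library can locate the math library):
|
|     import ctypes
|     from ctypes.util import find_library
|
|     # Load the C math library; the path is platform-
|     # dependent, which find_library papers over.
|     libm = ctypes.CDLL(find_library("m"))
|
|     # Declare the C signature: double sqrt(double).
|     libm.sqrt.argtypes = [ctypes.c_double]
|     libm.sqrt.restype = ctypes.c_double
|
|     print(libm.sqrt(2.0))  # 1.4142135623730951
|
| The expert's value lives in the library itself; the
| interface really is a few lines.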
| rootnod3 wrote:
| I think that is exactly the problem: with the over-
| reliance on LLMs, the expertise to write the
| foundational libraries that even other languages rely on
| is going away.
| AnthonyMouse wrote:
| > Modern web-dev comes around and now the old Java hands
| are annoyed that these new kids are just slamming NPM
| packages together and polyfills everywhere and no one
| understands Real Software Design.
|
| The real issue here is that a lot of the modern tech stacks
| are _crap_, but won for other reasons, e.g. JavaScript is
| a terrible language but became popular because it was the
| only one available in browsers. Then you got a lot of
| people who knew JavaScript so they started putting it in
| places outside the browser because they didn't want to
| learn another language.
|
| You get a similar story with Python. It's essentially a
| scripting language and poorly suited to large projects, but
| sometimes large projects _start out_ as small ones, or
| people (especially e.g. mathematicians in machine learning)
| choose a language for their initial small projects and then
| lean on it again because it's what they know even when the
| project size exceeds what the language is suitable for.
|
| To slay these beasts we need to get languages that are
| actually good in general but also good at the things that
| cause languages to become popular, e.g. to get something
| better than JavaScript to be able to run in browsers, and
| to make languages with good support for large projects to
| be easier to use for novices and small ones, so people
| don't keep starting out in a place they don't want to end
| up.
| HarHarVeryFunny wrote:
| I don't think the value of senior developers is so much in
| knowing how more things work, but rather that they've
| learnt (over many projects of increasing complexity) how to
| design and build larger more complex systems, and this
| knowledge mostly isn't documented for LLMs to learn from.
| An LLM can do the LLM thing and copy designs it has seen,
| but this is cargo-cult behavior - copy the surface form of
| something without understanding why it was built that way,
| and when a different design would have been better for a
| myriad of reasons.
|
| This is really an issue for all jobs, not just software
| development, where there is a large planning and reasoning
| component. Most of the artifacts available to train an LLM
| on are the end result of reasoning, not the reasoning
| process itself (the day by day, hour by hour, diary of
| the thought process of someone exercising their journeyman
| skills). As far as software is concerned, even the end
| result of reasoning is going to have very limited
| availability when it comes to large projects since there
| are relatively few large projects that are open source
| (things like Linux, gcc, etc). Most large software projects
| are commercial and proprietary.
|
| This is really one of the major weaknesses of LLM-as-AGI,
| or LLM-as-human-worker-replacement - their lack of ability
| to learn on the job and pick up a skill for themselves as
| opposed to needing to have been pre-trained on it (with the
| corresponding need for training data). In-context learning
| is ephemeral, and anyway no substitute for weight updates
| where new knowledge and capabilities have been integrated
| with existing knowledge into a consistent whole.
| arrowsmith wrote:
| > I think that LLMs are only going to make people with real
| tech/programming skills much more in demand, as younger
| programmers skip straight into prompt engineering and never
| develop themselves technically beyond the bare minimum needed
| to glue things together.
|
| This is exactly what the article says in point 3.
| threetonesun wrote:
| It's the "CTO's nephew" trope but at 100x the cost.
| bdhcuidbebe wrote:
| Yea, i agree fully.
|
| Real programming of course won't go away. But in the
| public eye it has lost its mysticism, as seemingly
| anyone can code now. Of course that ain't true, and no
| one has managed to create anything of substance by
| prompting alone.
| weatherlite wrote:
| How do we define real programming? I'm working on Python
| and JS codebases in my startup, so very high-level
| stuff. However, to reason well about everything that
| goes on in our code is no small feat for an LLM (or a
| human). If it's able to take our requirements,
| understand the business logic, and just start
| refactoring and creating new features on a codebase that
| is quite big, well yeah, that sounds like AGI to me. In
| that case I don't see why it won't be able to hack on
| kernels.
| skydhash wrote:
| The fact that you don't see why is the issue. Both
| Python and JS are very permissive and their runtime
| environments are very good. More often than not, you're
| just dealing with logic bugs and malformed domain data.
| A kernel codebase like Linux is one where many motivated
| individuals are trying every trick to get the computer
| to do something. And you're usually dealing with leaner
| abstractions, because general safety logic is not
| performant enough. It's a bit like the difference
| between a children's playground and a construction site.
| weatherlite wrote:
| > More often than not, you're just dealing with logic
| bugs
|
| Definitely. More often than not you're dealing with
| logic bugs. So the thing solving them will sometimes
| have to be able to reason quite well across large code
| bases (not every bug of course, but quite often), to the
| point that I don't really see how it's different from
| general intelligence if it can do that well. And if it
| gets to the point that it's AGI-ish, I don't see why it
| can't do kernel work (or at the very least reduce the
| number of jobs dramatically in that space as well).
| Perhaps you can automate the 50% of the job where we're
| not really thinking at all as programmers, but the other
| 50% (or less, or more, debatable) involves planning,
| reasoning, debugging, thinking. Even if all you do is
| Python and JS.
| skydhash wrote:
| > _So the thing solving them will sometimes have to be
| able to reason quite well across large code bases_
|
| The codebase only describes _what_ the software can do
| currently, never the _why_. And you can't reason without
| both. And the _why_ is the primary vector of changes,
| which may completely redefine the _what_. And even the
| _what_ has many possible interpretations. The code is
| only one specific _how_. Going from the _why_, to the
| _what_, to a specific _how_ is the core tenet of
| programming. Then you add concerns like performance,
| reliability, maintainability, security...
|
| Once you have a mature codebase, outside of refactoring
| and new features, you mostly have to edit a few lines for
| each task. Finding the lines to work on requires careful
| investigation, and you need to test carefully after that
| to ensure that no other operations have been affected. We
| already have good deterministic tools to help with that.
| throwaway0123_5 wrote:
| I agree with this. An AI that can fully handle web dev is
| clearly AGI. Maybe the first AGI can't fully handle OS
| kernel development, just as many humans can't. But
| if/once AGI is achieved it seems _highly_ unlikely to me
| that it will stay at the "can do web dev but can't do OS
| kernel dev" level for very long.
| hombre_fatal wrote:
| I think we who are already in tech have this gleeful
| fantasy that new tools impair newcomers in a way that
| will somehow serve us, the incumbents.
|
| But in reality pretty much anyone who enters software starts
| off cutting corners just to build things instead of working
| their way up from nand gates. And then they backfill their
| knowledge over time.
|
| My first serious foray into software wasn't even Ruby. It was
| Ruby on Rails. I built some popular services without knowing
| how anything worked. There was always a gem (lib) for it. And
| Rails especially insulated the workings of anything.
|
| An S3 avatar upload system was `gem install carrierwave` and
| then `mount_uploader :avatar, AvatarUploader`. It added an
| avatar <input type="file"> control to the User form.
|
| But it's not satisfying to stay at that level of ignorance
| very long, especially once you've built a few things, and you
| keep learning new things. And you keep wanting to build
| different things.
|
| Why wouldn't this be the case for people using LLMs like it
| was for everyone else?
|
| It's like presuming that StackOverflow will keep you as a
| question-asker your whole life when nobody here would relate
| to that. You get better, you learn more, and you become the
| question-answerer. And one day you sheepishly look at your
| question history in amazement at how far you've come.
| thechao wrote:
| I think you're right; I can see it in the accelerating
| growth curve of my _good_ Junior devs; I see grandOP's
| vision in my _bad_ Junior devs. Optimistically, I think
| this gives more jr devs more runway to advance deeper into
| more sophisticated tech stacks. I think we're gonna need
| more SW devs, not fewer, as these tools get better: things
| that were previously impossible will be possible.
| zozbot234 wrote:
| > more sophisticated tech stacks
|
| Please don't do this, pick more boring tech stacks
| https://news.ycombinator.com/item?id=43012862 instead.
| "Sophisticated" tech stacks are a huge waste, so please
| save the sophisticated stuff for the 0.1% of the time
| where you actually need it.
| marcosdumay wrote:
| Sophistication doesn't imply any increase or decrease in
| "boringness".
| zozbot234 wrote:
| The dictionary definition of 'sophisticated' is "changed
| in a deceptive or misleading way; not genuine or pure;
| unrefined, adulterated, impure." Pretty much the polar
| opposite of "boring" in a technology context.
| rcxdude wrote:
| That is an extremely archaic definition that's pretty far
| from modern usage, especially in a tech context
| whstl wrote:
| No, this is not "the" dictionary definition.
|
| This definition is obsolete according to Wiktionary:
| https://en.wiktionary.org/wiki/sophisticated (Wiktionary
| is the first result that shows when I type your words)
| Timwi wrote:
| No clue what dictionary you looked at but this is not at
| all what dictionaries actually say.
| sophacles wrote:
| That's great advice when you're building a simple CRUD
| app - use the paved roads for the 10^9th instance.
|
| It's terrible advice when you're building something that
| will cause that boring tech to fall over. Or when you've
| reached the limits of that boring tech and are still
| growing. Or when the sophisticated tech lowers CPU usage
| by 1% and saves your company millions of dollars. Or when
| that sophisticated tech saves your engineers hours and
| your company 10s of millions. Or just: when the boring
| tech doesn't actually do the things you need it to do.
| zozbot234 wrote:
| "Boring" tech stacks tend to be highly scalable in their
| own right - certainly more so than the average of trendy
| newfangled tech. So what's a lot more likely is that the
| trendy newfangled tech will fail to meet your needs and
| you'll be moving to some even newer and trendier tech, at
| surprisingly high cost. The point of picking the "boring"
| choice is that it keeps you off that treadmill.
| sophacles wrote:
| I'm not disagreeing with anything you said here - reread
| my comment.
|
| Sometimes you want to use the sophisticated shiny new
| tech because you actually need it. Here's a recent
| example from a real situation:
|
| The linux kernel (a boring tech these days) has a great
| networking stack. It's choking on packets that need to be
| forwarded, and you've already tuned all the queues and
| the cpu affinities and timers and polling. Do you -
|
| a) buy more servers and network gear to spread your
| packets across more machines? (boring and expensive and
| introduces new ongoing costs of maintenance, datacenter
| costs, etc).
|
| b) Write a kernel module to process your packets more
| efficiently? (a boring, well known solution, introduces
| engineer costs to make and maintain as well as downtime
| because the new shiny module is buggy?)
|
| c) Port your whole stack to a different OS (risky, but
| choosing a different boring stack should suffice... if
| youre certain that it can handle the load without kernel
| code changes/modules).
|
| d) Write a whole userspace networking system (trendy and
| popular - your engineers are excited about this,
| expensive in eng time, risks lots of bugs that are
| already solved by the kernel just fine, have to re-invent
| a lot of stuff that exists elsewhere)
|
| e) Use ebpf to fast path your packets around the kernel
| processing that you don't need? (trendy and popular -
| your engineers are excited about this, inexpensive
| relative to the other choices, introduces some new bugs
| and stability issues til the kinks are worked out)
|
| We sinned and went with (e). That new-fangled tech met
| our needs quite well - we still had to buy more gear but
| far less than projected before we went with (e). We're
| actually starting to reach limits of ebpf for some of our
| packet operations too so we've started looking at (d)
| which has come down in costs and risk as we understand
| our product and needs better.
|
| I'm glad we didn't go the boring path - our budget wasn't
| eaten up with trying to make all that work and we could
| afford to build features our customers buy instead.
|
| We also use postgres to store a bunch of user data. I'm
| glad we went the boring path there - it just works and we
| don't have to think about it, and that lack of attention
| has afforded us the chance to work on features customers
| buy instead.
|
| The point isn't "don't choose boring". It's: blindly
| choosing boring instead of evaluating your actual needs
| and options from a knowledgeable place is unwise.
| zozbot234 wrote:
| None of these seem all that 'trendy' to me. The real
| trendy approach would be something like leaping directly
| to a hybrid userspace-kernelspace solution using
| something like
| https://github.com/CloudNativeDataPlane/cndp and/or the
| https://www.kernel.org/doc/html/latest/networking/af_xdp.htm...
| addressing that the former is built on. Very
| interesting stuff, don't get me wrong there - but hardly
| something that can be said to have 'stood the test of
| time' like most boring tech has. (And I would include
| things like eBPF in that by now.)
| sophacles wrote:
| I have similar examples from other projects of using
| io_uring and af_xdp with similar outcomes. In 2020 when
| the ebpf decision was made it was pretty new and trendy
| still too... in a few cases each of these choices
| required us to wait for deployment until some feature we
| chose to depend on landed in a mainline kernel. Things
| move a bit slower that far down the stack so new doesn't
| mean "the js framework of the week", but it's still the
| trendy unproven thing vs the well-known path.
|
| The point is still: evaluate the options for real. Using
| the new thing because it's new and exciting is equally
| as foolish as using the boring thing because it's well-
| proven... if those are your main criteria.
| itronitron wrote:
| Today I learned that some tech stacks are sophisticated,
| I suppose those are for the _discerning_ developer.
| gorjusborg wrote:
| > I think we're gonna need more SW devs, not fewer
|
| Code is a liability. What we really care about is the
| outcome, not the code. These AI tools are great at
| generating code, but are they good at maintaining the
| generated code? Not from what I've seen.
|
| So there's a good chance we'll see people using tools to
| generate a ton of instant legacy code (because nobody in
| house has ever understood it) which, if it hits
| production, will require skilled people to figure out how
| to support it.
| kmoser wrote:
| We will see all of it: lots of poor code, lots of
| neutral code (LLMs cranking out reasonably well-written
| boilerplate), and even some improved code (by devs who
| use LLMs to ferret out inefficiencies and bugs in their
| existing, human-written codebase).
|
| This is no different from what we see with any tool or
| language: the results are highly dependent on the
| experience and skills of the operator.
| ivanbalepin wrote:
| > Code is a liability
|
| This is so true! Actual writing of the code is such a
| small step in overall running of a typical
| business/project, and the less of it the better.
| duderific wrote:
| > Code is a liability
|
| Another way I've seen this expressed, which resonates
| with me, is "All code is technical debt."
| thomasfromcdnjs wrote:
| I agree with this stance. Junior developers are going to
| learn faster than previous generations, and I'm happy for
| it.
|
| I know that is confronting for a lot of people, but I
| think it is better to accept it, and spend time thinking
| about what your experience is worth. (A lot!)
| zahlman wrote:
| >Junior developers are going to learn faster than
| previous generations, and I'm happy for it.
|
| I would have agreed, until I started seeing the kinds of
| questions they're asking.
| MattGaiser wrote:
| Or assuming software needs to be of a certain quality.
|
| Software engineers 15 years ago would have thought it crazy
| to ship a full browser with every desktop app. That's now
| routine. Wasteful? Sure. But it works. The need for low
| level knowledge dramatically decreased.
| lolinder wrote:
| > Why wouldn't this be the case for people using LLM like
| it was for everyone else?
|
| I feel like it's a bit different this time because LLMs
| aren't just an abstraction.
|
| To make an analogy: Ruby on Rails serves a similar role as
| highways--it's a quick path to get where you're going, but
| once you learn the major highways in a metro area you can
| very easily break out and explore and learn the surface
| streets.
|
| LLMs are a GPS, not a highway. They tell you what to do and
| where to go, and if you follow them blindly you will not
| learn the layout of the city, you'll just learn how to use
| the GPS. I find myself unable to navigate a city by myself
| until I consciously force myself off of Google Maps, and I
| don't find that having used GPS directions gives me a leg
| up in understanding the city--I'm starting from scratch no
| matter how many GPS-assisted trips I've taken.
|
| I think the analogy helps both in that the weaknesses in
| LLM coding are similar and _also_ that it 's not the end of
| the world. I don't need to know how to navigate most cities
| by memory, so most of the time Google Maps is exactly what
| I need. But I need to recognize that leaning on it too much
| for cities that I really do benefit from knowing by heart
| is a problem, and intentionally force myself to do it the
| old-fashioned way in those cases.
| abeppu wrote:
| I think also a critical weakness is that LLMs are trained
| on the code people write ... and our code doesn't
| annotate what was written by a human and what was
| suggested by a tool. In your analogy, this would be like
| if your sat nav system suggests that you turn right where
| other people have turned right ... because they were
| directed to turn by their sat nav.
| thedanbob wrote:
| In fact, I'm pretty sure this already happens and the
| results are exactly what you'd expect. Some of the
| "alternate routes" Google Maps has suggested for me in
| the past are almost certainly due to other people making
| unscheduled detours for gas or whatever, and the
| algorithm thinks "oh this random loop on a side street is
| popular, let's suggest it". And then anyone silly enough
| to follow the suggestion just adds more signal to the
| noise.
| thaumasiotes wrote:
| Long before any use of LLMs, OsmAnd would direct you, if
| you were driving past Palo Alto, to take a congested
| offramp to the onramp that faced it across the street.
| There is no earthly reason to do that; just staying on
| the freeway is faster and safer.
|
| So it's not obvious to me that patently crazy directions
| must come from watching people's behavior. Something else
| is going on.
| smackeyacky wrote:
| In Australia the routes seem to be overly influenced by
| truck drivers, at least out of the cities. Maps will
| recommend you take some odd town bypass when just going
| down Main Street is easier.
|
| I imagine what you saw is some other frequent road users
| making choices that get ranked higher.
| therein wrote:
| > if you were driving past Palo Alto, to take a congested
| offramp to the onramp that faced it across the street
|
| If you're talking about that left turn into Alma with the
| long wait instead of going into the Stanford roundabout
| and then the overpass, it still does that.
| RicoElectrico wrote:
| OsmAnd doesn't use traffic data. You can enable traffic
| map layers by feeding a reverse-engineered URL, though.
| thaumasiotes wrote:
| I'm not talking about use of traffic data. In the
| abstract, assuming you are the only person in the world
| who owns a car, that route would be a very bad
| recommendation. Safety concerns would be lower, but
| still, there's no reason you'd ever do that.
| ahi wrote:
| Google Maps has some strange feedback loops. I frequently
| drive across the Bay Bridge to Delaware beaches. There
| are 2 or 3 roughly equal routes with everyone going to
| the same destination. Google will find a "shorter" route
| every 5 minutes. Naturally, Maps is smart enough to
| detect traffic, but not smart enough to equally
| distribute users to prevent it. It creates a traffic jam
| on route A, then tells all the users to use route B which
| causes a jam there, and so on.
| zahlman wrote:
| It hadn't even occurred to me that there are places where
| enough people are using Google Maps while driving to
| cause significant impact on traffic patterns. Being car-
| free (and smartphone-free) really gives a different
| perspective.
| conaclos wrote:
| This is also problematic in cases where navigation apps
| are not updated and drivers start taking routes they are
| no longer authorized to take.
| pishpash wrote:
| That already happens. Maps directs you to odd nonsense
| detours somewhat frequently now, such that you get better
| results by overriding the machine. It's going down the
| way of web search.
| thaumasiotes wrote:
| > It's going down the way of web search.
|
| This is an interesting idea. There's an obvious force
| directing search to get worse, which is the adversarial
| desire of certain people to be found.
|
| But no such force exists for directions. Why would they
| be getting worse?
| fuzzzerd wrote:
| Probably my cynicism, but it's that the more stores you
| drive past, the more likely you are to stop off and buy
| something.
| adityamwagh wrote:
| Exactly! This is an amazing observation and analogy.
| startupsfail wrote:
| For now LLMs in coding are more like an LLM version of a
| GPS, not the GPS itself.
|
| Like imagine you'd ask an LLM for turn-by-turn
| directions and then follow them ;). That's how it
| feels when LLMs are used for coding.
|
| Sometimes amazing, sometimes generated code is a swamp of
| technical debt. Still, a decade ago it was completely
| impossible. And the sky is the limit.
| jddj wrote:
| > LLMs are a GPS, not a highway.
|
| I love these analogies and I think this one is apt.
|
| To adapt another which I saw here to this RoR thread, if
| you're building furniture then LLMs are powertools while
| frameworks are ikea flatpacks.
| bloomingkales wrote:
| It's the best analogy of the month. I don't think cab
| drivers today are the same as the cab drivers of the
| past who knew the city by heart.
|
| So, it's been a privilege gentlemen, writing apps from
| scratch with you.
|
| _walks off the programming Titanic with a giant violin_
| nuancebydefault wrote:
| The problem is now that the LLM GPS will lead you to the
| wrong place once a day on average, and then you still
| need to either open the map and study where you are and
| figure out the route, or refine the destination address
| and pray it will bring you to the correct place. Such a
| great analogy!
| cozzyd wrote:
| Try asking your GPS for the Western blue line stop on the
| Chicago L. (There are two of them and it will randomly
| pick one)
| epcoa wrote:
| What is "your GPS" meant here. With Google Maps and Apple
| Maps it consistently picks the closest one (this being
| within minutes to both but much closer to one), which
| seems reasonable. Maybe not ideal as when either of these
| apps will bring up a disambiguation for a super market
| chain or similar, but I'm not witnessing randomness.
| nuancebydefault wrote:
| To be clear, above I was talking about LLMs. Randomness
| in real GPS usage is something I have never encountered
| in 15 years or so of using Google Maps. 99 percent of
| the time it brings/brought me exactly where I want to
| be, even around road works or traffic jams. It seems
| some people have totally different experiences, so odd.
| cozzyd wrote:
| Perhaps they have improved their heuristic for this one,
| though perhaps it was actually Uber/Lyft that randomly
| picks one when given as a destination...
| gausswho wrote:
| Strangely this reminds me of exactly how you would
| navigate in parts of India before the Internet became
| ubiquitous.
|
| The steps were roughly: Ask a passerby how to get where
| you want to go. They will usually confidently describe
| the steps, even if they didn't speak your language.
| Cheerfully thank them and proceed to follow the
| directions. After a block or two, ask a new passerby.
| Follow their directions for a while and repeat. Never
| follow the instructions fully. This triangulation served
| to naturally filter out faulty guidance and hucksters.
|
| Never thought that would one day remind me of
| programming.
| nuancebydefault wrote:
| Indeed. My experience in India is that people are
| friendly and helpful and try to help you in a very
| convincing way, even when they don't know the answer.
| Not so far off the LLM user experience.
| bgoated01 wrote:
| I'm the kind of guy who decently likes maps, and I pay
| attention to where I'm going and also to the map before,
| during, and after using a GPS (Google maps). I do benefit
| from Google maps in learning my way around a place. It
| depends on how you use it. So if people use LLMs to code
| without trying to learn from it and just copy and paste,
| then yeah, they're not going to learn the skills
| themselves. But if they are paying attention to the
| answers they are getting from the LLMs, adjusting things
| themselves, etc. then they should be able to learn from
| that as well as they can from online code snippets,
| modulo the (however occasional) bad examples from the
| LLM.
| JumpCrisscross wrote:
| > _LLMs are a GPS, not a highway. They tell you what to
| do and where to go_
|
| It still gives you code you can inspect. There is no
| black box. Curious people will continue being curious.
| lolinder wrote:
| The code you can inspect is analogous to directions on a
| map. Some have noted in this thread that for them
| directions on a map actually do help them learn the
| territory. I have found that they absolutely do not help
| me.
|
| That's not for lack of curiosity, it seems to be
| something about the way that I'm wired that making
| decisions about where to navigate helps me to learn in a
| way that following someone else's decisions does not.
| JumpCrisscross wrote:
| You have to study the map to learn from it. Zoom in and
| out on surroundings, look up unfamiliar landmarks, _et
| cetera_. If you just follow the GPS or copy paste the
| code no, you won't learn.
| zahlman wrote:
| The problem is that coders taking this approach are
| dominantly ones who lack the relevant skill - ones who
| are taking that approach _because_ they lack that skill.
| Ma8ee wrote:
| The ones that until now copied and pasted everything from
| Stack Overflow.
| Capricorn2481 wrote:
| > But it's not satisfying to stay at that level of
| ignorance very long
|
| That's the difference. This is how you feel because you
| like programming to some extent. Having worked closely
| with them, I can tell you there are many people going
| into bootcamps who flat out dislike programming and just
| heard it pays well. Some of them get jobs, but they
| don't want to learn anything. They just want to do the
| minimum that doesn't get them fired. They are not
| curious even about tasks they are supposed to do.
|
| I don't think this is inherently wrong, as I don't feel
| like gatekeeping the profession if their bosses feel they
| add value. But this is a classic case of losing the
| junior-to-expert pipeline. We could easily find ourselves
| in a spot in 30 years where AI coding is rampant but there
| are no experts who actually know what it does.
| ytpete wrote:
| There have been people entering the profession for
| (purported) money and not love of the craft for at least
| as long as the 20 years I've been in it. So long as there
| are _also_ people who still genuinely enjoy it and take
| pride in doing the job well, then the junior-to-expert
| pipeline isn't lost.
|
| I buy that LLMs may shift the proportion of those two
| camps. But doubt it will really eliminate those who
| genuinely love building things with code.
| unyttigfjelltol wrote:
| > But in reality pretty much anyone who enters software
| starts off cutting corners just to build things instead of
| working their way up from nand gates.
|
| The article is right in a zoomed-in view (fundamental
| skills will be rare and essential), but in the big picture
| the critique in the comment is better (folks rarely start
| on nand gates). Programmers of the future will have less
| need to know code syntax the same way current programmers
| don't have to fuss with hardware-specific machine code.
|
| The people who still do hardware-specific code, are they
| currently in demand? The marketplace is smaller, so results
| will vary and will probably, as the article suggests, be less
| satisfactory for the participant with the time-critical
| need or demand.
| jayd16 wrote:
| That's fine and all but I'm not sure the nand-gate folks
| are out of a job either.
| geodel wrote:
| Great points. I made my own journey from an offshore
| application support contractor to full-time engineer,
| learning a lot along the way. Along the journey I've
| seen folks who held good/senior engineering roles just
| stagnate or move to management roles.
|
| Industry is now large enough to have all sorts of people.
| Growing, stagnating, moving out, moving in, laid off,
| retiring early, or just plain retiring etc.
| amanda99 wrote:
| I agree, and I also share your experience (guess I was a
| bit earlier with PHP).
|
| I think what's left out though is that this is the
| experience of those who are really interested and for whom
| "it's not satisfying" to stay there.
|
| As tech has turned into a money-maker, people aren't doing
| it for the satisfaction, they are doing it for the money.
| That appears to cause more corner cutting and less learning
| what's underneath instead of just doing the quickest fix
| that SO/LLM/whatever gives you.
| tgv wrote:
| > And then they backfill their knowledge over time.
|
| If only. There are too many devs who've learnt to write JS
| or Python, and simply won't change. I've seen one case
| where someone ported an existing 20k-line C++ app to a
| browser app in the most unsuitable way with emscripten,
| where 1100 lines of TypeScript do a much better job.
| britzkopf wrote:
| Who the hell, in today's market, is going to hire an
| engineer with a tenuous grasp on foundational technological
| systems, with the hope that one day they will backfill?!
| csmpltn wrote:
| > "But it's not satisfying to stay at that level of
| ignorance very long"
|
| It's not about satisfaction: it's literally dangerous and
| can bankrupt your employer, cause immense harm to your
| customers and people at home, and make you unhirable as an
| engineer.
|
| Let's take your example of "an S3 avatar upload system",
| which you consider finished after writing 2 lines of code
| and installing a couple of packages. What makes sure this
| can't be abused by an attacker to DDOS your system, leading
| to massive bills from AWS? What happens after an attacker
| abuses this system and takes control of your machines? What
| makes sure those avatars are "safe-for-work" and legal to
| host in your S3 bucket?
|
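| (Concretely: even the billing/abuse angle alone takes more
| than two lines. A minimal boto3 sketch of one mitigation - a
| size- and type-capped presigned POST - where the bucket and
| key names are made up for illustration, and which does
| nothing about NSFW content or malware:
|
|     import boto3
|
|     s3 = boto3.client("s3")
|
|     # Clients upload directly to S3, but only within these
|     # bounds - no multi-gigabyte "avatars" on your AWS bill.
|     post = s3.generate_presigned_post(
|         Bucket="example-avatars-bucket",  # illustrative name
|         Key="avatars/user-123.png",       # illustrative key
|         Conditions=[
|             ["content-length-range", 1, 1024 * 1024],
|             ["starts-with", "$Content-Type", "image/"],
|         ],
|         ExpiresIn=60,  # the upload URL dies after a minute
|     )
|     # 'post' holds the URL and form fields the client must
|     # use; anything outside the conditions is rejected.
|
| And that's still only one of the three questions above.)
|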
| People using LLMs and feeling all confident about it are
| the equivalent of hobby carpenters who watched a DIY
| video on YouTube and built a garden shed over the
| weekend. You're telling me they're now qualified to go
| build buildings and bridges?
|
| > "It's like presuming that StackOverflow will keep you as
| a question-asker your whole life when nobody here would
| relate to that."
|
| I meet people like this during job interviews all the
| time when I'm hiring for a position. I can't tell you how
| many people with 10+ years of industry experience I've met
| recently who can't explain how to read data from a file on
| the machine's local file system.
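|
| (To be concrete about the level of question: in Python the
| whole task is something like
|
|     # any path on the local disk; "notes.txt" is just an
|     # illustrative file name
|     with open("notes.txt", encoding="utf-8") as f:
|         data = f.read()
|
| and that is what goes unanswered.)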
| askonomm wrote:
| Difference here being that you actually learned the
| information about Ruby on Rails, whereas the modern
| programmer doesn't learn anything. They are but a
| clipboard-like vessel that passes information from an LLM
| onto a text editor, rarely ever actually reading and
| understanding the code. And if something doesn't work, they
| don't debug the code, they debug the LLM for not getting it
| right. The actual knowledge here never gets stored in the
| brain, making any future learning or evolving impossible.
|
| I've had to work with developers who are over-dependent on
| LLMs; one didn't even know how to undo code, and had to
| ask an LLM to undo it. Almost as if the person is a zombie
| or something. It's scary to witness. And as soon as you ask
| them to explain the rationale for the solution they came
| up with - dead silence. They can't, because they never
| actually _thought_.
| Kerrick wrote:
| Difference here being that you actually learned the
| information about computers, whereas the modern
| programmer doesn't learn anything. They are but a typist-
| like vessel that passes information from an architect
| onto a text editor, rarely ever actually reading and
| understanding the compiled instructions. And if something
| doesn't work, they don't debug the machine code, they
| complain about the compiler for not getting it right. The
| actual knowledge here never gets stored in the brain,
| making any future learning or evolving impossible.
|
| I've had to work with developers who are over-dependent
| on high-level languages. One didn't even know how to
| trace execution in machine code; they had to ask a
| debugger. Almost as if the person is a zombie or
| something. It's scary to witness. And as soon as you ask
| them to explain their memory segmentation
| strategy - dead silence. They can't, because they never
| actually _thought_.
| floatrock wrote:
| Abstractions on top of abstractions on top of turtles...
|
| It'll be interesting to see what kinds of new tools come
| out of this AI boom. I think we're still figuring out
| what the new abstraction tier is going to be, but I don't
| think the tools to really work at that tier have been
| written yet.
| askonomm wrote:
| Touche.
| zahlman wrote:
| No, it really isn't at all comparable like that (and
| other discussion in the thread makes it clear why). Users
| of high-level languages clearly still do _write code_ in
| those languages, code that comes out of their own thought
| rather than e.g. the GoF patterns book. They don't just
| complain about compilers; they actually do debug the
| high-level code, based on the compiler's error messages
| (or, more commonly, runtime results). When people get
| their code from LLMs, however, you can see very often
| that they have no idea how to proceed when the code is
| wrong.
|
| Debugging is a skill anyone can learn, which applies
| broadly. But some people just _don't_. People who want
| correct code to be written for them are fundamentally
| asking something different than people who want writing
| correct code to be easier.
| sevensor wrote:
| > I think we who are already in tech have this gleeful
| fantasy that new tools impair newcomers in a way that will
| somehow serve us, the incumbents, in some way.
|
| Well put. There's a similar phenomenon in industrial
| maintenance. The "grey tsunami." Skilled electricians,
| pipefitters, and technicians of all stripes are aging out
| of the workforce. They're not being replaced, and instead
| of fixing the pipeline, many factories are going out of
| business, and many others are opting to replace equipment
| wholesale rather than attempt repairs. Everybody loses,
| even the equipment vendors, who in the long run have fewer
| customers left to sell to.
| sterlind wrote:
| At present, LLMs are basically Stack Overflow with infinite
| answers on demand... of Stack Overflow quality and
| relevance. Prompting is the new Googling. It's a critical
| base skill, but it's not sufficient.
|
| The models I've tried aren't that great at algorithm
| design. They're abysmal at generating highly specific,
| correct code (e.g. kernel drivers, consensus protocols,
| locking constructs.) They're good plumbers. A lot of
| programming is plumbing, so I'm happy to have the help, but
| they have trouble doing actual computer science.
|
| And most relevantly, they currently don't scale to large
| codebases. They're not autonomous enough to pull a work
| item off the queue, make changes across a 100kloc codebase,
| debug and iterate, and submit a PR. But they can help a lot
| with each individual _part_ of that workflow when focused,
| so we end up in the perverse situation where junior devs
| act as the machine's secretary, while the model does most
| of the actual programming.
|
| So we end up de-skilling the junior devs, but the models
| still can't replace the principal devs and researchers, so
| where are the principal devs going to come from?
| zahlman wrote:
| >The models I've tried aren't that great at algorithm
| design. They're abysmal at generating highly specific,
| correct code (e.g. kernel drivers, consensus protocols,
| locking constructs.) They're good plumbers. A lot of
| programming is plumbing, so I'm happy to have the help,
| but they have trouble doing actual computer science.
|
| I tend towards tool development, so this suggests a
| fringe benefit of LLMs to me: if my users are asking LLMs
| to help with a specific part of my API, I know that's the
| part that sucks and needs to be redesigned.
| zahlman wrote:
| >Why wouldn't this be the case for people using LLM like it
| was for everyone else?
|
| Because of the mode of interaction.
|
| When you dive into a framework that provides a ton of
| scaffolding, and "backfill your knowledge over time"
| (guilty! using Nikola as an SSG has been my entry point to
| relearn modern CSS, for example), you're forced to proceed
| by creating your own loop of experimentation and research.
|
| When you interact with an LLM, and use forums to figure out
| problems the LLM didn't successfully explain to you (about
| its own output), you're in chat mode the whole time. Even
| if people are willing to teach you to fish, they won't
| voluntarily start the lesson, because you haven't shown any
| interest in it. And the fish are all over the place - for
| now - so why would you want to learn?
|
| >It's like presuming that StackOverflow will keep you as a
| question-asker your whole life when nobody here would
| relate to that.
|
| Of course nobody on HN would relate to that first-hand. But
| as someone with extensive experience curating Stack
| Overflow, I can assure you I have seen it second-hand many
| times.
| weatherlite wrote:
| There's no need for tens of millions of OS kernel devs; most
| of us are writing business-logic CRUD apps.
|
| Also, it's not entirely clear to me why LLMs should get
| extremely good at web app development but not OS development;
| as far as I can see, it's the amount and quality of training
| data that counts.
| wesselbindt wrote:
| > as far as I can see it's the amount and quality of
| training data that counts
|
| Well there's your reason. OS code is not as in demand or
| prevalent as crud web app code, so there's less relevant
| data to train your models on.
| woah wrote:
| The OS code that exists is much higher quality so the
| signal to noise ratio is much better
| wesselbindt wrote:
| I think arguably there's still a quantity issue, but I'm
| no expert on LLMs. Plus I hear the Windows source code is
| a bit of a nightmare. But for every Windows there's a
| TempleOS, I suppose.
| eitally wrote:
| I agree. It's the current generation's version of what
| happened with the advent of JavaScript frameworks about 15
| years ago, when suddenly web devs stopped learning how
| computers actually work. There will always be high demand for
| software engineers who actually know what they're doing, can
| debug complex code bases, and can make appropriate decisions
| about how to apply technology to business problems.
|
| That said, AI agents are absolutely going to put a bunch of
| lower end devs out of work in the near term. I wouldn't want
| to be entering the job market in the next couple of years....
| mixmastamyk wrote:
| > There will always be high demand for software engineers
| who actually know what they're doing
|
| Unfortunately they won't be found, due to horrible tech
| interviews focused on "culture" (*-isms), leetcode under
| the gun, or resumes thrown in the trash at first sight for
| lack of a full degree. AMHIK.
| chasd00 wrote:
| > I wouldn't want to be entering the job market in the next
| couple of years....
|
| I bet there's a software dev employment boom about 5 years
| away once it becomes obvious competent people are needed to
| unwind and rework all the llm generated code.
| atlintots wrote:
| Except juniors are not going to be the competent people
| you're looking for to unwind those systems. Personally,
| no matter how it plays out, I feel like the entry-level
| market in this field is going to take a hit. It will
| become much more difficult and competitive.
| wesselbindt wrote:
| > Or the mission critical systems powering this airplane
| you're flying next week... to pretend like this will all be
| handled by LLMs is so insanely out of touch with reality.
|
| Airplane manufacturers have proved themselves more than
| willing to sacrifice safety for profits. What makes you think
| they would stop short of using LLMs?
| AlexCoventry wrote:
| > people with real tech/programming skills much more in
| demand, as younger programmers skip straight into prompt
| engineering
|
| Perhaps Python will become the new assembly code. :-)
| x86hacker1010 wrote:
| You write this as if newcomers can't use the LLM to more
| deeply understand these topics, in addition to using it as
| glue. This mindset is a fallacy; newcomers are as adept and
| passionate as any other generation. They have better tools
| and can compete just the same.
| tharne wrote:
| > Or the mission critical systems powering this airplane
| you're flying next week... to pretend like this will all be
| handled by LLMs is so insanely out of touch with reality.
|
| Found the guy who's never worked for a large publicly-traded
| company :) Do you know what's out of touch with reality?
| Thinking that $BIG_CORP execs who are compensated based on
| the last three months of stock performance will do anything
| other than take shortcuts and cut corners given the chance.
| SoftTalker wrote:
| We've been in this world for decades.
|
| Most developers couldn't write an operating system to save
| their life. Most could not write more than a simple SQL
| query. They sling code in some opinionated dev stack that
| abstracts the database and don't think too hard about the
| low-level details.
| rapind wrote:
| > Somebody needs to write that operating system the LLM runs
| on. Or your bank's backend system that securely stores your
| money. Or the mission critical systems powering this airplane
| you're flying next week... to pretend like this will all be
| handled by LLMs is so insanely out of touch with reality.
|
| When they do this, I really want to know they did this. Like
| an organic food label. Right now AI is this buzzword that
| companies self-label with for marketing, but when that
| changes, I still want to see who's using AI to handle my
| data.
| dartos wrote:
| I don't think "prompt engineering" will remain its own field.
|
| It's just modern SEO and SEO will eat it, eventually.
|
| Prompt engineering as a service makes more sense than having
| on-staff people anyway, since your prompt's effectiveness can
| change from model to model.
|
| Have someone else deal with platform inconsistencies, like
| always.
| ljm wrote:
| The enshittification will come for the software engineers
| themselves eventually, because so many businesses out there
| only have their shareholders in mind, not their customers.
| If a broken product or the promise of a product is enough
| to boost the stock, then why bother investing in the talent
| to build it properly?
|
| Look at Google and Facebook - absolute shithouse services now
| that completely fail to meet the basic functionality they had
| ~20 years ago. Google still rakes in billions rendering ads
| in the style of a search engine and Facebook the same for
| rendering ads in the format of a social news feed. Why even
| bother with engineering anything except total slop?
| hintymad wrote:
| Would technical depth change the fundamental supply and
| demand, though? If we view AI as a powerful automation tool,
| it's possible that the overall demand will be lowered so much
| that the demand of the deep technical expertise will go down
| as well. Take the EE industry, for instance: the technical
| expertise required to get things done is vast and deep, yet
| demand has not been nearly as strong as in the software
| industry.
| shortrounddev2 wrote:
| > younger programmers skip straight into prompt engineering
| and never develop themselves technically beyond the bare
| minimum needed to glue things together
|
| This was true before LLMs though. A lot of people just glue
| javascript libraries together
| KerrAvon wrote:
| yup. the good news is this should make interviewing easier;
| bad news is there'll be fewer qualified candidates.
|
| the other thing, though, is that you and I know that LLMs
| can't write or debug operating systems, but the people who
| pay us and see LLMs writing prose and songs? hmm
| efitz wrote:
| On a recent AllIn podcast[1], there was a fascinating
| discussion between Aaron Levie and Chamath Palihapitiya about
| how LLMs will (or will not) supplant software developers,
| which industries, total addressable markets (TAMs), and
| current obstacles preventing tech CEOs from firing all the
| developers right now. It seemed pretty obvious to me that
| Chamath was looking forward to breaking his dependence on
| software developers, and predicts AI will lead to a 90%
| reduction in the market for software-as-a-service (and the
| related jobs).
|
| Regardless of point of view, it was an eye-opening discussion
| to hear a business leader discussing this so frankly, but I
| guess not so surprising since most of his income these days
| is from VC investments.
|
| [1] https://youtu.be/hY_glSDyGUU?t=4333
| devoutsalsa wrote:
| I hired a junior developer for a couple months and was
| incredibly impressed with what he was able to accomplish with
| a paid ChatGPT subscription on a greenfield project for me.
| He'd definitely struggle with a mature code base, but you
| have to start somewhere!
| foota wrote:
| I think I've seen the comparison with respect to training
| data, but it's interesting to think of the presence of LLMs
| as a sort of barrier to developing skills, akin to pre-WW2
| low-background steel (which, fun fact, isn't actually that
| relevant anymore, since background radiation levels have
| dropped significantly since the partial end of nuclear
| testing).
| brightball wrote:
| One of my first bosses was a big Perl guy. I checked on what
| he was doing 15 years later and he was one of 3 people at
| Windstream handling backbone packet management rules.
|
| You just don't run into many people comfortable with that
| technology anymore. It's one of the big reasons I go out of
| my way to recruit talks on "old" languages to be included at
| the Carolina Code Conference every year.
| benatkin wrote:
| When I see people trying to define which programmers will
| enjoy continued success as AI continues to improve, I often
| see the No True Scotsman fallacy used.
|
| I wish more would try to describe what the differentiating
| skills and circumstances are instead of just saying that real
| programmers should still be in demand.
|
| I think maybe raw talent is more important than how much you
| "genuinely love coding"
| (https://x.com/AdamRackis/status/1888965636833083416) or how
| much of a _real programmer_ you are. This essay captures raw
| talent pretty well IMO:
| https://www.joelonsoftware.com/2005/07/25/hitting-the-
| high-n...
| zahlman wrote:
| >I think that LLMs are only going to make people with real
| tech/programming skills much more in demand, as younger
| programmers skip straight into prompt engineering and never
| develop themselves technically beyond the bare minimum needed
| to glue things together.
|
| My experience with Stack Overflow, the Python forums, etc.
| etc. suggests that we've been there for a year or so already.
|
| On the one hand, it's revolutionary that it works at all (and
| I have to admit it works better than "at all").
|
| But when it doesn't work, a significant fraction of those
| users will try to get experienced humans to fix the problem
| for them, for free - while also deluding themselves that
| they're "learning programming" through this exercise.
| trod1234 wrote:
| It is far more likely that everything, not just IT,
| collapses before we make it to the point you mention.
|
| LLMs replace entry-level people who invested in education.
| They would have the beginning knowledge, but there's no
| means to get better, because the opportunities vanish once
| those positions are replaced. It's a sequential pipeline
| failure of talent development. In the meantime, the mid-
| and senior-level people cannot pass their knowledge on;
| they age out, and die.
|
| What happens when you hit a criticality point where
| production, which is dependent on these systems, can no
| longer continue?
|
| The knowledge implicit in production is lost, the economic
| incentives have been poisoned. The distribution systems are
| destroyed.
|
| How do you bootstrap recovery, not over centuries but in
| weeks or months, for something that effectively took
| several centuries to build in the first place?
|
| If this isn't sufficient to explain the core of the issue,
| check out the Atari/Nintendo crash, which isn't nearly as
| large as this but illustrates the dangers of destroying
| your distributor networks.
|
| If you pay attention to the details, you'll see Atari's crash
| was fueled by debt financing, and in the process they
| destroyed their distributor networks with catastrophic
| losses. After that crash, Nintendo couldn't get shelf-space;
| no distributor would risk the loss without a guarantee. They
| couldn't advertise as video games. They had to trojan horse
| the perception of what they were selling, and guarantee it.
| There is a documentary on Amazon which covers this, _Playing
| with Power_. Check it out.
| renegade-otter wrote:
| AI will create more jobs, if anything, as the "engineers" out
| of their depth create massive unmaintainable legacy.
| OccamsMirror wrote:
| It's Access databases all over again.
| DanielHB wrote:
| The only thing I use it for is for small self-contained
| snippets of code on problems that require use of APIs I don't
| quite remember off the top of my head. The LLM spits out
| calls I need to make or attributes/config I need to set and I
| go check the docs to confirm.
|
| Like "How to truncate text with CSS alone" or "How to set an
| AWS EC2 instance RAM to 2GB using terraform"
| bwfan123 wrote:
| A "causal model" is needed to fix bugs ie, to "root-cause" a
| bug.
|
| LLMs don't yet have a built-in causal model of how something
| works. What they do have is pattern matching over a large
| index and generation of plausible answers from that index.
| (Aside: the plausible snippets are of questionable licensing
| lineage, as the indexes could contain public code with
| restrictive licensing.)
|
| Causal models require machinery which is symbolic, which is
| able to generate hypotheses and test and prove statements about
| a world. LLMs are not yet capable of this and the fundamental
| architecture of the llm machine is not built for it.
|
| Hence, while they are a great productivity boost as a semantic
| search engine, and a plausible snippet generator, they are not
| capable of building (or fixing bugs in) a machine which
| requires causal modeling.
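|
| (A toy sketch of what "generate and test hypotheses" means
| mechanically - the git-bisect-style loop at the heart of
| root-causing a regression. Illustrative code, not something
| an LLM produced:
|
|     def first_bad(commits, is_bad):
|         """Binary-search for the first bad commit, assuming
|         history flips from good to bad exactly once."""
|         lo, hi = 0, len(commits) - 1
|         while lo < hi:
|             mid = (lo + hi) // 2
|             if is_bad(commits[mid]):
|                 hi = mid      # hypothesis confirmed: look left
|             else:
|                 lo = mid + 1  # hypothesis refuted: look right
|         return commits[lo]
|
| Each probe is a small experiment against the world; that is
| the machinery being described.)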
| fiso64 wrote:
| >Causal models require machinery which is symbolic, which is
| able to generate hypotheses and test and prove statements
| about a world. LLMs are not yet capable of this and the
| fundamental architecture of the llm machine is not built for
| it.
|
| Prove that the human brain does symbolic computation.
| bwfan123 wrote:
| We don't know what the human brain does, but we know it can
| produce symbolic theories or models of abstract worlds (in
| the case of math) or real worlds (in the case of science).
| It can also produce the "symbolic" Turing machine which
| serves as an abstraction for all the computation we use
| (CPU/GPU/etc.).
| tarkin2 wrote:
| Sorry for inadvertently advertising, but I met a guy who used
| v0.dev to make impressive websites (although admittedly he
| had used React before, so he was experienced) with
| professional success. It's more than arguable that his
| company will fire devs or hire fewer of them. Of course, in a
| decade or so there'll be a skill gap, unless LLMs can fill
| that gap too.
| anavat wrote:
| My take is that AI's ability to generate new code will prove so
| valuable, it will not matter if it is bad at changing existing
| code. And that the engineers of the distant future (like, two
| years from now) will not bother to read the generated code, as
| long as it runs and passes the tests (which will also be AI-
| generated).
|
| I try to use AI daily, and every month I see how it is able to
| generate larger and more complex chunks of code from the first
| shot. It is almost there. We just need to adopt the new
| paradigm, build the tooling, and embrace the new weird future
| of software development.
| bdhcuidbebe wrote:
| > I try to use AI daily
|
| You should reflect on the consequences of relying too much on
| it.
|
| See https://www.404media.co/microsoft-study-finds-ai-makes-
| human...
| anavat wrote:
| I don't buy that it makes me dumber. It just makes me worse
| at some things I used to do before, while making me better
| at some other things. Oftentimes it doesn't feel like
| coding anymore, more like I'm training to be a lawyer or
| something. But that's my bet.
| weatherlite wrote:
| I agree, but too many serious people are hinting we are very
| close; I can't ignore it anymore. Sure, when Sam Altman or
| Zuckerberg say we're close, I don't know if I can believe
| them, because obviously those dudes will say anything to
| sell/pump the stock price. But how about Demis Hassabis? He
| doesn't strike me like that at all. Same for Geoff Hinton,
| Bengio and a couple of others.
| bdhcuidbebe wrote:
| Market hype is all it is.
| layer8 wrote:
| People investing their lives in the field are inherently
| biased. This is not to diminish them, it's just a fact of the
| matter. Nobody knows how general intelligence really works,
| nor even how to reliably test for it, so it's all
| speculation.
| RivieraKid wrote:
| I'm surprised to see a huge disconnect between how I perceive
| things and the vast majority of comments here.
|
| AI is obviously not good enough to replace programmers today.
| But I'm worried that it will get much better at real-world
| programming tasks within years or months. If you follow AI
| closely, how can you be dismissive of this threat? OpenAI will
| probably release a reasoning-based software engineering agent
| this year.
|
| We have a system that is similar to top humans at competitive
| programming. This wasn't true 1 year ago. Who knows what will
| happen in 1 year.
| layer8 wrote:
| When I see stuff like
| https://news.ycombinator.com/item?id=42994610 (continued in
| https://news.ycombinator.com/item?id=42996895), I think the
| field still has fundamental hurdles to overcome.
| lordswork wrote:
| This kind of error doesn't really matter in programming
| where the output can be verified with a feedback loop.
| layer8 wrote:
| This is not about the numerical result, but about the way
| it reasons. Testing is a sanity check, not a substitute
| for reasoning about program correctness.
| tmnvdb wrote:
| Why do you think this is a fundamental hurdle, rather than
| just one more problem that can be solved? I don't have
| strong evidence either way, but I've seen a lot of
| 'fundamental insurmountable problems' fall by the wayside
| over the past few years. So I'm not sure we can be that
| confident that a problem like this, for which we have very
| good classic algorithms, is a fundamental issue.
| Imanari wrote:
| https://tinyurl.com/mrymfwwp
|
| We will see, maybe models do get good enough but I think we
| are underestimating these last few percent of improvement.
| johnnyanmac wrote:
| It's the opposite. I don't think it'll replace programmers
| legitimately within a decade. I DO think that companies will
| try it a lot in the coming months and years anyway, and that
| programmers will be the only ones suffering the consequences
| of such actions.
| cejast wrote:
| Nobody can tell you whether progress will continue at
| current, faster or slower rates - humans have a pretty
| terrible track record at extrapolating current events into
| the future. It's like how movies in the 80's made predictions
| about where we'll be in 30 years time. Back to the Future
| promised me hoverboards in 2015 - I'm still waiting!
| tmnvdb wrote:
| Compute power increases and algorithmic efficiency
| improvements have been rapid and regular. I'm not sure why
| you thought that Back to the Future was a documentary film.
| cejast wrote:
| Unless you have a crystal ball there is nothing that can
| give you certainty that will continue at the same or
| better rate. I'm not sure why you took the second half of
| the comment more seriously than the first.
| tmnvdb wrote:
| Nobody has certainty about the future. We can only look
| at what seems most likely given the data.
| tmnvdb wrote:
| People somehow have expectations that are both too high and
| too low at the same time. They expect (demand) current
| language models completely replace a human engineer in any
| field without making mistakes (this is obviously way too
| optimistic) while at the same time they are ignoring how
| rapid the progress has been and how much these models can now
| do that seemed impossible just 2 years ago, delivering huge
| value when used well, and they assume no further progress
| (this seems too pessimistic, even if progres is not
| guaranteed to continue at the same rate).
| mirsadm wrote:
| ChatGPT 4 was released 2 years ago. Personally I don't
| think things have moved on significantly since then.
| yojat661 wrote:
| Exactly. I have been waiting for gpt5 to see the delta,
| but after gpt4 things seemed to have stalled.
| nitwit005 wrote:
| It's a bit paradoxical. A smart enough AI, and there is no
| point in worrying, because almost everyone will be out of a
| job.
|
| The problem case is the somewhat odd scenario where there is
| an AI that's excellent at software dev, but not most other
| work, and we all have to go off and learn some other trade.
| ge96 wrote:
| A friend of mine reached out with some code ChatGPT wrote for
| him to trade crypto. It had so much random crap in it; lines
| would say "AI enhanced trading algo" and it was just an
| np.random.randint call. It was pulling in random deps that
| weren't even used.
|
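| (A hypothetical reconstruction of the kind of thing described
| - the grand-sounding label attached to a coin flip; this is
| not the friend's actual code:
|
|     import numpy as np
|
|     def ai_enhanced_trading_algo():
|         # "AI enhanced trading algo" - in reality a random
|         # draw dressed up with a confident comment
|         return "BUY" if np.random.randint(0, 2) else "SELL"
|
| Nothing about it is AI, enhanced, or much of an algo.)
|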
| I get it though: I'm terrible at working with IMUs, and I
| want to just get something going, but I can't; there's that
| wall I need to overcome, e.g. learning the math behind it.
| Same with programming: it helps to have the background,
| knowing how to read code and how it works.
| HDThoreaun wrote:
| I used Claude to help write a crypto trading bot. It helped
| me push out thousands of lines a day. What would've taken
| months took a couple of weeks. Obviously you still need
| experienced pilots, but unless we find an absolute fuckload
| of new work to do (not unlikely, looking at history) it's
| hard for me to see anything other than way fewer developers
| being needed.
| jillesvangurp wrote:
| I think of it as an enabler that reduces my dependency on
| junior developers. Instead of delegating simple stuff to them,
| I now do it myself with about the same amount of overhead (have
| to explain what I want, have to triple check the results) on my
| side but less time wasted on their end.
|
| A lot of micromanaging is involved either way. And most LLMs
| suffer from a severe case of Groundhog Day. You can't expect
| them to remember anything over time. Every conversation starts
| from scratch. If it's not in your recent context, specify it
| again. Etc. Quite tedious, but it still beats me doing it
| manually. For some things.
|
| For at least the next few years, it's going to be an
| expectation from customers that you will not waste their time
| with stuff they could have just asked an LLM to do for them.
| I've had two instances of non technical CPO and CEO types
| recently figuring out how to get a few simple projects done
| with LLMs. One actually is tackling rust programs now. The
| point here is not that that's good code but that neither of
| them would have dreamed about doing anything themselves a few
| years ago. The scope of the stuff you can get done quickly is
| increasing.
|
| LLMs are worse at modifying existing code than they are at
| creating new code. Every conversation is a new conversation.
| Groundhog Day, every day. Modifying something with a lot of
| history and context requires larger context windows and tools
| to fill those. The tools are increasingly becoming the
| bottleneck. Because without context the whole thing derails and
| micromanaging a lot of context is a chore.
|
| And a big factor here is that huge context windows are costly
| so there's an incentive for service providers to cut some
| corners there. Most of the value for me these days comes from
| LLM tool improvements that result in me having to type less.
| "Fix this" now means "fix the thing under my cursor in my open
| editor, with the full context of that file". I've been doing
| this a lot for the past few weeks.
| belter wrote:
| > Now completing small chunks of mundane code, explaining code,
| doing very small mundane changes. Very good at.
|
| I would not trust them until they can do the news properly.
| Just read the source, Luke.
|
| "AI chatbots unable to accurately summarise news, BBC finds" -
| https://www.bbc.com/news/articles/c0m17d8827ko
| strangescript wrote:
| As context sizes get larger (and remain accurate across that
| size) and speeds increase, especially inference, it will start
| handling these large, complex codebases.
|
| I think people lose sight of how much better it has gotten in
| just a few years.
| hintymad wrote:
| > It's incredibly far away from doing any significant change in
| a mature codebase
|
| A lot of the use cases are about building something that has
| already been built before, like a web app, a popular algorithm,
| etc. I think the real threat to us programmers is
| stagnation. If we don't have new use cases to develop but only
| introduce marginal changes, then we can surely use AI to
| generate our code from the vast amount of previous work.
| giancarlostoro wrote:
| They all sound like crypto bros talking about AI. It's really
| frustrating to talk to them, just like crypto bros.
| moogly wrote:
| They're the same people in my experience.
| giancarlostoro wrote:
| It's the same energy for sure.
| deeviant wrote:
| The huge disconnect is that the skill set to use LLMs for code
| effectively is not the same skill set of standard software
| engineering. There is a very heavy intersection and I would say
| you cannot be effective at LLM software development without
| being an effective software engineer, but being an effective
| software engineer does not by any means make somebody good at
| LLM development.
|
| Very talented engineers, coworkers that I would place above
| myself in skill, seemed stumped by it, while I have realized
| at least a 10x productivity gain.
|
| The claim that LLMs are not being applied in mature, complex
| code-bases is pure fantasy, example:
| https://arxiv.org/abs/2501.06972. Here Google is using LLMs to
| accelerate the migration of mature, complex production systems.
| agentultra wrote:
| I'm more keen on formal methods to do this than LLMs. I take
| the view that we need more precise languages that require us to
| write less code that obviously has no errors in it. LLMs are
| primed to generate more code using less precise specifications,
| resulting in code that has no obvious errors in it.
| micromacrofoot wrote:
| I think you're discounting efficiency gains -- through a series
| of individually minor breakthroughs in LLM tech I think we
| could end up with things like 100M+ token context windows
|
| We've already seen this sort of incrementalism over the past
| couple of years, the initial buzz started without much more
| than a 2048 context window and we're seeing models with 1M out
| there now that are significantly more capable.
| __MatrixMan__ wrote:
| I think it's more likely that we'll see a rise in workflows
| that AI is good at, rather than AI rising to meet the
| challenges of our more complex workflows.
|
| Let the user pair with an AI to edit and hot-reload some subset
| of the code which needs to be very adapted to the problem
| domain, and have the AI fine-tuned for the task at hand. If
| that doesn't cut it, have the user submit issues if they need
| an engineer to alter the interface that they and the AI are
| using.
|
| I guess this would resemble how MySpace used to do it, where
| you'd get a text box where you could provide custom edits, but
| you couldn't change the interface.
| rs186 wrote:
| I use AI coding assistants daily, and whenever there is a task
| that those tools cannot do correctly or quickly enough, so that
| I need to fall back to editing things myself, I spend a bit of
| time thinking about what is so special about the task.
|
| My observation is that LLMs do repetitive, boring tasks really
| well, like boilerplate code and common logic/basic UI that
| thousands of people have already done. Well, in some sense,
| developers who spend a lot of time writing generic code are
| already at risk of being outsourced.
|
| The tasks that need a ton of tweaking, or aren't worth asking
| AI about at all, are those that are very specific to a
| particular product and need to meet requirements that often
| come from
| discussions or meetings. Well, I guess in theory if we had
| transcripts for everything, AI could write code like the way
| you want, but I doubt that's happening any time soon.
|
| I have since become less worried about the pace AI will replace
| human programmers -- there is still a lot that these tools
| cannot do. But for sure people need to watch out and be aware
| of what's happening.
| xp84 wrote:
| > It's incredibly far away from doing any significant change in
| a mature codebase.
|
| I agree with this completely. However the problem that I think
| the article gets at is still real because junior engineers also
| can't do significant changes on a mature codebase when they
| first start out. They used to do the 'easy stuff' which freed
| the rest of us up to do bigger stuff. But:
|
| 1. Companies like mine don't hire juniors anymore
|
| 2. With Copilot I can be so much more productive that I don't
| need juniors to do "the easy stuff" because Copilot can easily
| do that in 1/1000th the time a junior would.
|
| 3. So now who is going to train those juniors to get to the
| level where we need them to be to make those "significant
| changes"?
| hypothesis wrote:
| > So now who is going to train those juniors to get to the
| level where we need them to be to make those "significant
| changes"?
|
| Founders will cash out long before that becomes an issue.
| Alternatively, the hype is true and they will obsolete
| programmers, also solving the issue above...
|
| This is quite devious if you think about it: a withering
| pipeline of new devs, and only they have an immediate fix in
| either case.
| outworlder wrote:
| > I'm thinking there's going to have to be some other
| breakthrough or something other than LLM's.
|
| We actually _need_ a breakthrough for the promises to
| materialize, otherwise we will have yet another AI Winter.
|
| Even though there seems to be some emergent behavior (some
| evidence that LLMs can, for example, create an internal chess
| representation by themselves when asked to play), that's not
| enough. We'll end up with diminishing returns. Investors will
| get bored of waiting and this whole thing comes crashing down.
|
| We'll get a useful tool in our toolbox, as we do with every
| AI cycle.
| darepublic wrote:
| I dunno if it's always good at explaining code. It tends to
| take everything at face value and is unable to opinionatedly
| reject BS when presented with it, which in the majority of
| cases is bad.
| bodegajed wrote:
| This is also my problem. When I ask someone a technical
| question without providing context on some abstraction
| (common, because abstractions can be very deep), they'll
| say: "Hmm, not sure... can you check what this is supposed
| to do?"
|
| LLMs don't do this; they confidently hallucinate the
| abstraction out of thin air or use their outdated knowledge
| store, producing wrong usage or wrong input parameters.
| scotty79 wrote:
| > We know what it is good at and what it's not.
|
| We know what it's good at today. And we can be pretty sure it
| won't be any worse at it in the future. And 5 years ago the
| state of the art was basically the output of a Markov chain.
| In 5 years we might be in another place entirely.
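|
| (For contrast, a toy sketch of that era - a word-bigram
| Markov chain, roughly what casual "state of the art" text
| generation looked like:
|
|     import random
|     from collections import defaultdict
|
|     def babble(text, n_words=20):
|         words = text.split()
|         table = defaultdict(list)
|         for a, b in zip(words, words[1:]):
|             table[a].append(b)  # record observed bigrams
|         word = random.choice(words)
|         out = [word]
|         for _ in range(n_words - 1):
|             if not table[word]:
|                 break           # dead end: no known successor
|             word = random.choice(table[word])
|             out.append(word)
|         return " ".join(out)
|
| Locally plausible, globally incoherent.)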
| necovek wrote:
| Agreed, and I haven't yet seen any single instance of a company
| firing software engineers because AI is replacing them (even if
| by increasing productivity of another set of software
| engineers): I've asked this a number of times, and while it's a
| common refrain, I haven't really seen any concrete news report
| saying it in so many words.
|
| And to be honest, if any company is firing software engineers
| hoping AI replaces their production, that is good news, since
| that company will soon stop existing and stop treating
| engineers like shit, which it probably did :)
| dartos wrote:
| Marketers are really good at their job.
|
| Couple that with new money and retail investors thinking
| they're in a gold rush, and you get the environment we're in.
| lofaszvanitt wrote:
| Time to wake up and think in terms of 10-20 years ahead. Everyone
| around NVIDIA dies out... anyone with GPU compute ideas just
| cannot succeed... 3Dfx full of saboteurs that hinder their
| progress.
|
| Open source takes away the livelihood of programmers and gives it
| to moneymen for free. They used open source to train AI models.
| Programmers got back a few stars and a pat on the back. And some
| recognition, but mostly nothing. All this while big corps use
| their work without compensation. There are zero compensation
| options for open-source programmers on GitHub. Somehow that's
| been left out.
|
| The same bullshit comes up again and again in different forms.
| Like "your ideas are worth nothing," blablabla. Suuure, but
| moneymen usually
| have zero ideas and they like to expropriate others' ideas, FOR
| FREE. While naive people give away their ideas and work for free,
| the other side gives back nooothiiiing.
|
| It's already too late.
|
| So programmers, and other fields that will be AI-ified in the
| coming decades, will slowly go extinct. AI is a skill-
| appropriation device that in the long term will make people
| useless, so they
| don't need an artist, a musician etc. They will just need a
| capable AI to create whatever they want, without the hassle of
| the human element. It's the ultimate control tool to make people
| SLAVES.
|
| Hope I'm wrong.
| nerder92 wrote:
| This article is entirely built on 2 big and wrong assumptions:
|
| 1. AI coding ability will stay the same as it is today
|
| 2. Companies will replace people with AI en masse at a given
| moment in time
|
| Of course both these assumptions are wrong. The quality of code
| produced by AI will improve dramatically as models evolve. And
| it's not even just the model itself: the tooling, the agentic
| capabilities, and the workflows will change entirely to adapt
| to this. (They already are.)
|
| The second assumption is also wrong. Intelligent companies will
| not lay people off en masse to use AI only; they will most
| likely slow their hiring of devs, because their existing devs,
| enhanced by AI, will be enough to cover their coding needs. At
| the end of the day, product is just one area of a company;
| building the complete e2e ultimate solution with zero
| distribution or marketing will not help.
|
| This article, in my opinion, is just doomerist storytelling for
| nostalgic programmers who see programming as some kind of
| magical artistic craft and AI as the villain that has arrived
| to remove all the fun from it. You can still switch off Cursor
| and write donut.c if you enjoy doing it.
| Madmallard wrote:
| What makes you think (1) will be true?
|
| It is only generating based on training data. In mature code
| bases there is a massive amount of interconnected state that is
| not already present in any github repository. The new logic
| you'd want to add is likely something never done before. As
| other programmers have stated, it seems to be improving at
| generating useful boilerplate and making simple websites and
| such related to what's out there en masse on Github. But it
| can't make any meaningful changes in an extensively matured
| codebase. Even Claude Sonnet is absolutely hopeless at this.
| And the requirement before the codebase is "matured" is not
| very high.
| nerder92 wrote:
| > It is only generating based on training data
|
| This is not the case anymore, current SOTA CoT models are not
| just parroting stuff from the training data. And as of today
| they are not even trained exclusively on publicly (and not so
| publicly) available stuff, but they massively use synthetic
| data which the model itself generated or distilled data from
| other smarter models.
|
| I'm using and I know plenty of people using AI in current
| "mature" codebases with great results, this doesn't mean it
| does the work while you sip a coffee (yet)
|
| *NOTE: my evidence for this is that o3 could not have broken
| ARC-AGI by parroting, because it's a benchmark made exactly
| for this reason. Not a coding benchmark per se, but still
| transposable imo.
| fragmede wrote:
| Try Devin or OpenHands. OpenHands isn't quite ready for
| production, but it's informative on where things are going
| and to watch the LLM go off and "do stuff", kinda on its
| own, from my prompt (while I drink coffee).
| ryandrake wrote:
| > The new logic you'd want to add is likely something never
| done before.
|
| 99% of software development jobs are not as groundbreaking as
| this. It's mostly companies doing exactly what their
| competitors are doing. Very few places are actually doing
| things that an LLM has truly never seen while crawling
| through GitHub. Even new innovative products generally boil
| down to the same database fetches and CRUD glue and JSON
| parsing and front end form filling code.
| SpicyLemonZest wrote:
| Groundbreakingness is different from the type of novelty
| that's relevant to an LLM. The script I was trying to write
| yesterday wasn't groundbreaking at all: it just needed to
| pull some code from a remote repository, edit a specific
| file to add a hash, then run a command. But it had to do
| that _within our custom build system_, and there are few
| examples of that, so our coding assistant couldn't figure
| out how to do it.
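|
| (For what it's worth, the generic skeleton is trivial - every
| name below is a made-up stand-in, and the last step is the
| custom part the assistant couldn't produce:
|
|     import pathlib
|     import subprocess
|
|     REPO = "https://example.com/repo.git"  # illustrative URL
|
|     subprocess.run(
|         ["git", "clone", "--depth", "1", REPO, "src"],
|         check=True)
|     head = subprocess.run(
|         ["git", "-C", "src", "rev-parse", "HEAD"],
|         check=True, capture_output=True, text=True,
|     ).stdout.strip()
|     # "edit a specific file to add a hash": file name and
|     # format are hypothetical placeholders
|     manifest = pathlib.Path("src/build.manifest")
|     manifest.write_text(
|         manifest.read_text().replace("HASH=", f"HASH={head}"))
|     # the custom build system invocation is the part with few
|     # public examples; "./build" is a placeholder
|     subprocess.run(["./build", "src"], check=True)
|
| Everything above the last line is boilerplate a model knows;
| the last line is what it couldn't guess.)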
| skydhash wrote:
| > _Even new innovative products generally boil down to the
| same database fetches and CRUD glue and JSON parsing and
| front end form filling code._
|
| The simplest version of that is some CGI code or a PHP
| script, which everyone should be writing, according to your
| description. But then why have so many books been written
| about this seemingly simple task? So many frameworks, so
| many patterns, so many methodologies...
| causal wrote:
| Not to mention I haven't really seen AI replace anyone, except
| perhaps as a scapegoat for execs who were planning on layoffs
| anyway.
|
| That said, I do think there is real risk of letting AI hinder
| the growth of Junior dev talent.
| sanderjd wrote:
| I think I've seen us be able to do more with fewer people
| than in the past. But that isn't the limiting factor for our
| hiring. All else equal, we'd like to just do _more_, when we
| can afford to hire the people. There isn't a fixed amount of
| work to be done. We have lots of ideas for products and
| services to make if we have the capacity.
| causal wrote:
| Agreed, I often see AI discussed as if most companies
| wouldn't take 10x more developers if they could have them
| for free
| badgersnake wrote:
| > the quality of code produced by AI will improve dramatically
| as model evolves
|
| This is the incorrect assumption, or at least there's no
| evidence to support it.
| nerder92 wrote:
| If benchmarks mean anything in evaluating how model
| capabilities progress, the evidence is that all the existing
| benchmarks have been pretty much solved, except FrontierMath
| (https://epoch.ai/frontiermath)
| shihab wrote:
| I recently saw Sam Altman bragging about OpenAI's
| performance on Codeforces (leetcode-like website), which I
| consider just about the worst benchmark possible.
|
| 1. All problems are small: the prompt and solution are
| short (<100 LOC, often <60 LOC)
|
| 2. Solving those problems is more about recollecting
| patterns and less about good new insights. Now, top level
| human competitors do need original thinking, but that's
| only because our memory is too small to store all
| previously seen patterns.
|
| 3. Unusually good dataset: you have tens of thousands of
| problems, each with thousands of submissions, along with
| clear signals to train on (right/wrong, time taken, etc.),
| very rich discussion sections, etc.
|
| I think becoming the 100th-best Codeforces programmer is
| still an incredible achievement for an LLM. But for Sam
| Altman to specifically note the performance on this, I
| consider that a sign of weakness, not strength.
| badgersnake wrote:
| Altman spouts even more bullshit than his models, if
| that's even possible.
| layer8 wrote:
| Given that I've seen excellent mathematicians produce poor-
| quality code in real-world software projects, I'm not sure
| how relevant these benchmarks are.
| mrguyorama wrote:
| Benchmarks cannot tell you whether the tech will improve
| superlinearly, improve linearly, or plateau entirely.
|
| In the 90s, companies showed graphs of CPU frequency and
| projected we would be hitting 8 GHz pretty soon. Futurists
| predicted we would get CPUs running at tens of GHz.
|
| We only just now have 5 GHz CPUs, despite running at 4 GHz
| back in the mid 2000s.
|
| We fundamentally missed an important detail that wasn't
| considered at all in those projections.
|
| We know less about the theory of how LLMs and neural
| networks grow with effort than we did about how transistors
| operate over different speeds.
|
| You utterly cannot extrapolate from those kinds of graphs.
| high_na_euv wrote:
| >Of course both these assumptions are wrong, the quality of
| code produced by AI will improve dramatically as model evolves.
|
| How are you so sure?
| dev1ycan wrote:
| He's not; he's just another delusional venture capitalist who
| hasn't bothered to look up the counterarguments to his point
| of view made by mathematicians.
| randmeerkat wrote:
| > He's not; he's just another delusional venture capitalist
| who hasn't bothered to look up the counterarguments to his
| point of view made by mathematicians.
|
| Don't hate on it, just spin up some startup with "ai" and
| LLM hype. Juice that lemon.
| iamleppert wrote:
| When life gives you delusional VC's that don't understand
| the basics of what they are investing in, make lemonade!
| Very, very expensive lemonade!
| johnnyanmac wrote:
| Sadly my goals are more ephemeral than opening up a
| lemonade stand.
| anothermathbozo wrote:
| It's an emergent technology and no one knows for certain
| how far it can be pushed, not even mathematicians.
| RohMin wrote:
| I do feel that with the rise of the "reasoning" class of
| models, it's not hard to believe that code quality will
| improve over time.
| high_na_euv wrote:
| The thing is: how much
|
| 0.2x, 2x, 5x, 50x?
| RohMin wrote:
| Who knows? It just needs to be better than the average
| engineer.
| high_na_euv wrote:
| The thing is that this "just" may not happen soon
| hiatus wrote:
| It needs to be better than the average engineer whose
| abilities are themselves augmented by AI.
| johnnyanmac wrote:
| It just needs to be cheaper than the average engineer,
| you mean.
| croes wrote:
| Doesn't sound like improving dramatically.
| nerder92 wrote:
| I'm not sure; it's an observation based on how AI
| improvement is related to Moore's Law.
|
| [1](https://techcrunch.com/2025/01/07/nvidia-ceo-says-his-ai-
| chi...)
| high_na_euv wrote:
| But some say that Moore's Law is dead :)
|
| Anyway, the number of TikTok users correlates with
| advancements in AI too!
|
| Before TikTok the progress was slower; then, when TikTok
| appeared, it progressed like hell!
| nerder92 wrote:
| Yes, I see your point: correlation is not causation.
| Again, this is my best guess, based on my personal view
| of the world and my understanding of the data at hand at
| t0 (today). This doesn't spare it from being incorrect or
| extremely wrong, as always when dealing with predictions
| of a future outcome.
| somenameforme wrote:
| That's an assumption. Most/all neural network based tech
| faces a similar problem of exponentially diminishing
| returns. You get from 0 to 80 in no time. A bit of effort
| and you eventually ramp it up to 85, and it really seems
| the goal is imminent. Yet suddenly each percent, and then
| each fraction of a percent starts requiring exponentially
| more work. And then you can even get really fun things like
| you double your training time and suddenly the resultant
| software starts scoring worse on your metrics, usually due
| to overfitting.
|
| And it seems, more or less, clear that the rate of change
| in the state of the art has already sharply decreased. So
| it's likely LLMs have already entered into this window.
| kykeonaut wrote:
| However, an increase in compute doesn't necessarily mean
| an increase in output quality, as you need compute power
| plus data to train these models.
|
| Just increasing compute power will increase the
| performance/training speed of these models, but you also
| need to increase the quality of the data that you are
| training them on.
|
| Maybe... the reason these models show a high-school level
| of understanding is that most of the data on the internet
| that they have been trained on is of high-school-graduate
| quality.
| anothermathbozo wrote:
| No one has certainty here. It's an emergent technology and no
| one knows for certain how far it can be pushed.
|
| It's reasonable that people explore contingencies where the
| technology does improve to a point of driving changes in the
| labor market.
| croes wrote:
| That the second assumption is wrong is based on > intelligent
| companies will not lay off en masse to use AI
|
| How many companies are intelligent, given how many dumb
| decisions we see?
|
| If we assume enough not-so-intelligent companies, then better
| AI code will lead to mass firings.
| croes wrote:
| BTW, your reasoning for 1 sounds like the earlier reasoning
| for FSD.
|
| Assuming the same kind of growth in capabilities isn't backed
| by reality.
|
| The last release of OpenAI's model wasn't dramatically better.
|
| At the moment it's more about getting cheaper.
| blah2244 wrote:
| To be fair, his argument is valid for FSD! We have fully
| deployed FSD in multiple US cities now!
| ssimpson wrote:
| I tend to agree with you. The general pattern behind "x tool
| came along that made work easier" isn't to fire a bunch of
| folks; it's to make the people who are there do
| proportionally more work. I.e., if the tool cuts work in
| half, you'd be expected to do 2x the work. Automation and
| tools almost never "make our lives easier"; they just remove
| some of the lower-value-added work. It would be nice to live
| better and work less, but our overlords won't let that
| happen. The same output with less work by the individual
| isn't as good as the same or more output with the same or
| fewer people.
| fragmede wrote:
| > but our overlords won't let that happen
|
| If you have a job, working for a boss, you're trading your
| time for money. If you're a contractor and negotiate being
| paid by the project, you're being paid for results. Trading
| your time for money is the underlying contract. That's the
| fundamental nature of a job working for somebody else. You
| can escape that rat race if you want to.
|
| Someone I know builds websites for clients on a contract
| basis, and did so without LLMs. Within his market, he knows
| what a $X,000 website build entails. His clients were paying
| that rate for a website build-out prior to AI-augmented
| programming, and it would take a week to do that job. With
| help from LLMs, that same job now takes half as much time. So
| now he can choose to take on more clients and take home more
| pay, or not, and be able to take it easy.
|
| So that option is out there, if you can make that leap. (I
| haven't)
| johnnyanmac wrote:
| >You can escape that rat race if you want to.
|
| I'm working on it. But it takes money and the overlords
| definitely are trying to squeeze as of late.
|
| And yes, while I don't think I'm being replaced in months
| or years, I can see a possibility in a decade or two of the
| ladder being pulled up on most programming jobs. We'll
| either be treated as well as artists (assuming we still
| don't unionize) or we'll have to rely on our own abilities
| to generate value without corporate overlords.
| jayd16 wrote:
| These two predictions seem contradictory. If the AI massively
| improves, why would they slow-roll adoption?
| throwaway290 wrote:
| People want their AI stocks to go up. So they say things like
| sky is the limit and jobs are not going away (aka please
| don't regulate) in one sentence. I think only one of these is
| true.
| dkjaudyeqooe wrote:
| > Of course both these assumptions are wrong, the quality of
| code produced by AI will improve dramatically as model evolves.
|
| This is the fundamental delusion that is driving AI hype.
|
| Although scale has made LLMs look like magic, actual magic
| (AGI) is not on the scaling path. This is a conjecture (as is
| the converse), but I'm betting the farm on it personally and
| see LLMs as useful chat bots that augment other, better
| technologies for automation. If you want to pursue AGI, move on
| quickly to something structurally and fundamentally better.
|
| People don't understand that AGI is pure speculation. There is
| no rigorous, non-circular definition of human intelligence, let
| alone proof that AGI is possible or achievable in any
| reasonable time frame (like 100 years).
| aiono wrote:
| > the quality of code produced by AI will improve dramatically
| as model evolves.
|
| That's a very bold claim. We are already seeing a plateau in
| LLM capabilities in general. And there has been little
| improvement, since their birth, in the places where they fall
| short (like making holistic changes in a large codebase). They
| only improve where they are already good, such as writing
| small glue programs. Expecting significant breakthroughs from
| scaling alone, without any fundamental changes to the
| architecture, seems too optimistic to me.
| y-c-o-m-b wrote:
| > The second assumption is also wrong, intelligent companies
| will not layoff en masse to use AI only, they will most likely
| slow hiring devs because their existing enhanced devs using AI
| will suffice enough to their coding related needs
|
| After 20 years in tech, I can't think of a single company I've
| worked for/with that would fit the profile of an "intelligent"
| company. All of them make poor and irrational decisions
| regularly. I think you over-estimate the intelligence of
| leadership whilst simultaneously under-estimating their greed
| and eventual ability to self-destruct.
|
| EDIT: you also over-estimate the desire for developers to
| increase their productivity with AI. I use AI to reduce
| complexity and give me more breathing room, not to increase my
| output.
| mulmboy wrote:
| > After 20 years in tech, I can't think of a single company
| I've worked for/with that would fit the profile of an
| "intelligent" company. All of them make poor and irrational
| decisions regularly. I think you over-estimate the
| intelligence of leadership whilst simultaneously under-
| estimating their greed and eventual ability to self-destruct.
|
| Says nothing about companies and everything about you
|
| > you also over-estimate the desire for developers to
| increase their productivity with AI. I use AI to reduce
| complexity and give me more breathing room, not to increase
| my output.
|
| I'm the same. But I expect that once many begin to do this,
| there will be some who do use it for productivity and they
| will set the bar. Then people like you and I will either use
| it for productivity or fall behind.
| johnnyanmac wrote:
| I'm happy you've only worked for altruistic, not-for-profit
| minded companies that care about employee growth and take
| pride in their tech stack above all else. I have not had as
| fortunate an experience.
|
| >I expect that once many begin to do this, there will be
| some who do use it for productivity and they will set the
| bar.
|
| Yeah, probably. I've had companies so fixated on
| "velocity" instead of quality. I imagine they will
| definitely try to expect triple the velocity just because
| one person "gets so much done", not realizing how much of
| that illusion is spent correcting the submissions.
| mulmboy wrote:
| > I'm happy you've only worked for altruistic, not-for-profit
| minded companies that care about employee growth
| and take pride in their tech stack above all else. I
| have not had as fortunate an experience.
|
| No one is making this claim.
|
| My comment was a bit terse and provocative, rude,
| deserves the downvotes tbh. I'll take them.
|
| To elaborate ~ I've got a lot of empathy for the poster I
| was originally replying to. I've fallen into that way of
| thinking before, and it sure is comfortable. Of course,
| companies and their leadership make poor and irrational
| decisions. Often, however, it's easy to perceive their
| decisions as poor and irrational when you simply don't
| have the context they do. "Why would they x ?? if only
| y!!" but, you know, there may well be a good reason why
| that you aren't aware of, they may have different goals
| to you (which may well be selfish! and that doesn't make
| them irrational or anything). Feels similar to
| programmers hating when people say "can't you 'just' x" -
| well yes, but actually there's a mountain of additional
| considerations behind the scene that the person spouting
| "just" hasn't considered.
|
| Is leadership unintelligent, or displaying
| poor/irrational decision making, if the company self
| destructs? Perhaps. But quite possibly not. They probably
| got a whole lot out of it. Different priorities.
|
| Consider that leadership may label a developer
| unintelligent if that dev doesn't always consider how to
| drive shareholder value "gee they're so focused on
| increasing their salary not on business value". Well
| actually the dev is quite smart, from their own
| perspective. Same thing.
|
| And if every company you've ever worked for truly has
| poor leadership then, yeah, it's probably worth
| reassessing how you interview. Do you need to dig deeper
| into the business? Do you just not have the market value
| to negotiate landing a job at a company with intelligent
| leadership?
|
| So, two broad perspectives: either the poster has a
| challenge with perception, or they are poor at picking
| companies. Or perhaps the companies truly do have poor
| leadership but I think that unlikely. Hence it comes back
| to the individual.
|
| @y-c-o-m-b sorry for being a bit rude.
|
| Cheers for reading
| leptons wrote:
| > their existing enhanced devs using AI will suffice enough to
| their coding related needs.
|
| Not my experience. I spend as much time reading through and
| replacing wrong AI generated code as I do writing my own code,
| so it's really wasting my time more often than helping. It's
| really hit or miss, and about the only thing the AI gets right
| most often is writing console.log statements based on the
| variable I've just assigned, and that isn't really "coding".
| And even then it gets it right only about 75% of the time.
| Sure, that saves me some time, but I'm not seeing the supposed
| acceleration AI is hyped as giving.
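|
| For the curious, the pattern I mean is trivial completions like
| this (an illustrative sketch, not a real transcript):
|
|     // after I type an assignment like this...
|     const retryCount = Number(process.env.RETRY_COUNT ?? 3);
|     // ...the suggestion is usually just the matching log line:
|     console.log("retryCount:", retryCount);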
| reportgunner wrote:
| People really believe that companies are firing because AI will
| replace them?
| varsketiz wrote:
| Frankly, I agree with the points in the article, yet I'm
| triggered slightly by the screaming dramatic writing like
| "destroy everything".
| WaitWaitWha wrote:
| "What has been will be again, what has been done will be done
| again; there is nothing new under the sun." - King Solomon
|
| Litter, palanquin, and sedan chair carriers were fired.
|
| Oarsmen were fired.
|
| Horses were fired.
|
| . . . [time] . . .
|
| COBOL programmers were fired.
|
| and, so on.
|
| What was the expectation: that programmers would last forever?
| Lest we forget, barely a century ago, programmers started to
| push out a large swath of non-programmers.
|
| The more important question is what roles will push out whatever
| roles AI/LLMs create.
| Mistletoe wrote:
| I was watching How It's Made last night, watching how pencils
| were made, and thinking how hard this would be, how expensive
| a pencil would be if a person had to make them, and how an
| endless supply of them can be made this way. Then I
| thought about how software has allowed us to automate so many
| things and scale and I realized AI is the final step where we
| can automate and remove the programmers. Pencil makers were
| replaced, and so will programmers be. Your best hope is
| being the person running the pencil machine.
| WaitWaitWha wrote:
| > Your best hope is being the person running the pencil
| machine.
|
| Or, see what is coming and hop on that cart for the short
| lives we have.
|
| Pencils. There is a hobby of primitive survival, bushcraft, or
| primitive skills (youtuber John Plant of "Primitive
| Technology" is the best example). I am certain we could come
| up with a path to create "pencils". We would just need to
| define what we agree to be a "pencil" first.
|
| Is a stick of graphite or even wood charcoal wrapped in
| sheepskin a "pencil"? Would a hollowed juniper branch stuffed
| with the writing material be a "pencil"?
| goosejuice wrote:
| An agent would be able to replace sales, marketing, customer
| success, middle management and project managers much better and
| earlier than any developer of a software company.
|
| Nocode and Shopify-like consolidation are/have been much bigger
| threats imo. These large orgs are just trimming fat that they
| would have trimmed anyways.
|
| But hell what do I know. Probably nothing :)
| newAccount2025 wrote:
| One major critique: why do we think junior programmers really
| learn best from the grizzled veterans? AI coaches can give
| feedback on what someone is actually seeing and doing, and can be
| available 24x7. I suspect this can enable the juniors of the
| future to have a much faster rise to mastery.
| hennell wrote:
| I remember being in maths class with a kid next to me who was a
| maths wiz. He could see what I was doing, was available to help
| almost the whole lesson, far easier for me to ask than the
| teacher who had many other students.
|
| In theory a much faster rise to mastery. In practice I rarely
| had to actually do the work because he'd help me if I got
| stuck, and what made sense when he explained it didn't stick
| because I wasn't really doing it.
|
| I did very badly in my first test that year, and was moved
| elsewhere.
| swiftcoder wrote:
| Every generation sees a new technology that old timers loudly
| worry "will destroy programming as a profession".
|
| I'm old enough to remember when that new and destructive
| technology was Java, and the greybeards were all heavily invested
| in inline assembly as an essential skill of the serious
| programmer.
|
| The exact same 3 steps in the article happened about a decade ago
| during the "javascript bootcamp" craze, and while the web stack
| does grow ever more deeply abstracted, things do seem to keep on
| trucking along...
| hedora wrote:
| I'm not old enough to remember these, but they were certainly
| more disruptive than AI has been so far (reverse chronological
| order):
|
| - The word processor
|
| - The assembly line
|
| - Trains
|
| - Internal combustion engines
|
| I do remember some false starts from the 90's:
|
| - Computer animation will put all the animation studios out of
| business
|
| - Software agents will replace all middlemen with computers
| nitwit005 wrote:
| Technically, we automated most programming when we got rid of
| punch cards and created assembly languages.
| mattfrommars wrote:
| I read this on Reddit, but it captures in essence where we are
| headed in the future.
|
| "AI won't replace you. Programmers using AI will."
| throwaway7783 wrote:
| It has its uses, but often fails at seemingly simple things.
|
| The other day, I couldn't get Claude to generate an HTML page
| with a logo on the top left, no matter how I prompted.
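|
| For reference, the sort of page I was after is tiny. A
| hand-written sketch of it (the logo path is made up), wrapped
| in a TypeScript string for illustration:
|
|     const page: string = `<!doctype html>
|     <html>
|       <body style="margin: 0">
|         <!-- top-left placement is one CSS rule -->
|         <img src="logo.png" alt="Logo" width="120"
|              style="position: absolute; top: 8px; left: 8px">
|       </body>
|     </html>`;
|     console.log(page);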
| Workaccount2 wrote:
| Programmers are not going to go away. But the lavish salaries,
| benefits packages, and generous work/life balances probably will.
|
| I envision software engineering ending up in the same pit of
| mediocrity as all the other engineering disciplines.
| Pooge wrote:
| As a software engineer with about 4 years of experience, what can
| I do to avoid being left behind?
|
| The author mentions "systems programming" and "high-performance
| computing". Do you have any resources for that (whether it be
| books, videos, courses)?
| glouwbug wrote:
| High frequency trading. If you're talking something more
| hardware focused, try job searching for the exact term "C/C++".
| These jobs are typically standard library deprived (read:
| malloc, new, etc) and you'll be making calls to register sets,
| SPI and I2C lines. Embedded systems, really; think robotics,
| aviation, etc. If that's still too little hardware try finding
| something in silicon validation. Intel (of yesterday), AMD,
| nvidia, Broadcom, you'll be doing C to validate FPGA and ASIC
| spin ups. It's the perfect way to divorce yourself from
| conventional x86 desktops and learn SoC programming, which
| loops back into fields like HFT, where FPGA experience is
| _incredibly_ lucrative.
|
| But when anyone says systems programming, think hardware: how
| do I get that additional 15% performance on top of my
| conventional understanding of big O notation? Cache lines,
| cache levels, DMAs, branch prediction, the lot.
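|
| The cache-line point is easy to demo from any language; a toy
| sketch (TypeScript on Node here, the numbers are mine):
|
|     // Same O(n^2) work, very different memory access patterns.
|     const N = 2048;
|     const a = new Float64Array(N * N);
|
|     function rowMajor(): number {
|       let s = 0; // walks memory sequentially: cache-friendly
|       for (let i = 0; i < N; i++)
|         for (let j = 0; j < N; j++) s += a[i * N + j];
|       return s;
|     }
|
|     function colMajor(): number {
|       let s = 0; // strides N * 8 bytes per access: cache-hostile
|       for (let j = 0; j < N; j++)
|         for (let i = 0; i < N; i++) s += a[i * N + j];
|       return s;
|     }
|
|     console.time("row-major");
|     rowMajor();
|     console.timeEnd("row-major");
|     console.time("col-major");
|     colMajor();
|     console.timeEnd("col-major");
|
| The col-major loop does identical arithmetic but thrashes the
| cache, and is typically several times slower.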
| insane_dreamer wrote:
| I think a more interesting question is what the impact will be on
| the next generation looking at CS/SWE as a potential profession.
| It's been considered a pretty "safe" profession for a long time
| now. That will change over the next 10 years. Will parents advise
| their kids to avoid CS because the job market will be so much
| smaller in 10 years' time?
| sumoboy wrote:
| I'm sure that's happening right now. On the flipside will
| companies who hire in 4 years look at those CS/SWE kids as
| lesser-skilled devs because they relied so much on AI to pass
| classes and didn't really learn?
| hedora wrote:
| There was a similar effect during the dot com boom / crash.
|
| Everyone and their dog got a CS degree, and the average
| quality of that cohort was abysmal. However, it also created
| a huge supply of extremely talented people.
|
| The dot-com crash happened, and software development was
| "over forever", but the talented folks stuck around and are
| doing fine.
|
| People that wanted to go into CS still did. Some of them used
| stack overflow and google to pass their courses. They were as
| unemployable as the bottom of the barrel during the dot com
| boom.
|
| People realized there was a shortage of programmers, so CS
| got hot again for a bit. Now LLMs have hit and are disrupting
| most industries. This means that most industries need to
| rewrite their software. That'll create demand for now.
|
| Eventually, the LLM bust will come, programming will be "over
| forever" again, and the cycle will continue. At some point
| after Moore's law ends the boom and bust cycle will taper
| off. (As it has for most older engineering disciplines.)
| mdrzn wrote:
| "The New Generation of Drivers Will Be Useless Without a Horse"
| is what I read when I see articles like this.
| lawgimenez wrote:
| AI is cool, until it starts going down the client's absurd
| requirements.
| dragonwriter wrote:
| My opinion: tech isn't firing programmers for AI. It is firing
| programmers because of the financial environment, and waving
| around AI as a fig leaf to pretend that it is not really
| cutting back on output.
|
| When the financial environment loosens again, there'll be a new
| wave of tech hiring (which is about equally likely to publicly be
| portrayed as either reversing the AI firing or exploiting new
| opportunities due to AI, neither of which will be the real
| fundamental driving force.)
| smitelli wrote:
| I've come to believe it really is this.
|
| Everybody got used to the way things worked when interest rates
| were near zero. Money was basically free, hiring was on a
| rampage, and everybody was willing to try reckless moonshots
| with slim chances for success. This went on for like fifteen
| years -- a good chunk of the workforce has only ever known that
| environment.
| coolKid721 wrote:
| most narratives for everything are just an excuse for macro
| stuff. we had zirp for basically the entire period of 2008 -
| 2022 and when that stopped there were huge layoffs and less
| hiring. I see lots of newer/younger devs being really
| pessimistic about the future of the industry, being mindful of
| the macro factors is important so people don't buy into the AI
| narratives (which is just to bump up their stocks).
|
| If people can get a safer return buying bonds they aren't going
| to invest in expansion and hiring. If there is basically no
| risk free rate of return you throw your money at hiring/new
| projects because you need to make a return. Lots of that goes
| into tech jobs.
| johnnyanmac wrote:
| No one past sole very small businesses (aka a single person
| with contractors) is seriously trying to replace programmers
| with AI right now. I do feel we will hit that phase sometime
| down the line (probably in the '30s), so I at least think this
| is a tale to keep in the back of our minds long term.
| arrowsmith wrote:
| Did ChatGPT write this article? The writing style reeks of it.
| mola wrote:
| I believe that regardless of the validity of the "AI can
| replace programmers now" narrative, we will see big companies
| squeezing the labor force and padding their bottom line and
| their pockets.
|
| The fact that the narrative is false will be the problem of
| whoever replaces these CEOs, and of us workers.
| angusb wrote:
| Did anyone else read this as "Firing programmers for (AI will
| destroy everything)" or have I been reading too much Yudkowsky
| p0w3n3d wrote:
| Any sufficiently advanced technology is indistinguishable from
| magic ~ Arthur C. Clarke
|
| People who make decisions got bamboozled by the ads and
| marketing of AI companies. They failed to detect the lack of
| intelligence and were deceived into thinking they have magical
| golems for a fraction of the price, but eventually they will
| get caught with their pants down.
| jnet wrote:
| I find the people who promote AI the most are those with vested
| financial interests in AI. Don't get me wrong, I find it is a
| useful tool but it's not going to replace programmers any time
| soon.
| antirez wrote:
| You can substitute AI with "Some Javascript Framework" and the
| subtitles still apply very well. Yet nobody was particularly
| concerned about that.
| bdangubic wrote:
| hehehe yea but how many of us were hand-writing JavaScript for
| a living anyways :)
| fred_is_fred wrote:
| What I've seen is less "let's fire 1000 people and replace with
| AI" but more "let's not hire for 2 years and see what happens as
| AI develops".
| usrbinbash wrote:
| Clicked on the website. Greeted by the message: "Something has
| gone terribly wrong". It did load correctly on the second
| attempt, but I have to admit... well played raising the
| dramatic effect, webserver. Well played ;-)
| 999900000999 wrote:
| I have to disagree with this article. Companies as they are,
| particularly larger companies, have a lot of fluff. People who
| do about three or four hours of work a week, and effectively
| just sit around so senior management can claim they have so
| many people working on such and such project.
|
| With AI, you no longer need those employees to justify your high
| valuations. You don't need as many juniors. The party is over;
| tell the rest of the crew. I wouldn't advise anyone to get into
| tech right now. I know personally my wages have been stagnant
| for about 5 years. I still make fantastic money, and it's
| significantly more than the average American income, but my
| hopes and dreams of hitting 300 or 400k total comp and retiring
| by 40 are gone.
|
| Instead I've more or less accepted I'm going to have to keep
| working my middle-class job, and I might not even retire till
| my 50s! Tragic.
| pockmarked19 wrote:
| People who look forward to retiring are like people who look
| forward to heaven: missing out on life due to the belief their
| "real" life hasn't begun yet.
| 999900000999 wrote:
| I want to spend all day making music, and small games.
|
| I haven't figured out a way to do that in a manner that
| supports myself.
|
| Every job is ultimately filling out TPS reports. The reports
| might look a little different, but it's still a TPS report.
| renewiltord wrote:
| Amusingly, I spent years making multiples of your target
| comp and now I'm home sitting around using AI to make
| myself toy games.
|
| The barrier has dropped so low that I think I'd have been
| more productive if I were still working.
| 999900000999 wrote:
| Not like it's going to happen for me, but how did you
| reach such comp?
|
| I'm a simple man. If I hit 2 million in net worth I'm
| done working. I don't plan on having a family, so I'm
| just supporting myself.
|
| If I really made a ton of money I'd fund interesting open
| source games. Godot is the most popular open source game
| engine, and they're making it happen off just 40k a
| month.
|
| I'm a bit surprised Microsoft hasn't filled the void
| here. What's a few million dollars a year to get new
| programmers fully invested in a .net first game engine?
| renewiltord wrote:
| Worked in HFT. But tbh everyone I know in FAANG who stuck
| it out is doing even better.
| 999900000999 wrote:
| I've actually worked in finance for a bit, but I'm also
| content with where I'm at.
|
| I don't think I have a realistic chance at HFT though.
| Doesn't stop me from applying and dreaming...
| weatherlite wrote:
| I think we look forward to financial independence more so
| than the retirement itself. Could be nice not having to worry
| that some chatbot or younger dude is going to replace me and
| I'll have to go work at McDonald's (not that there's anything
| wrong with that).
| iainctduncan wrote:
| I work in tech diligence. This means the companies I talk to
| _cannot lie or refuse to answer a question_ (at risk of deals
| falling through and being sued to oblivion). Which means we get
| to hear the _real_ effects of tech debt all the time. I call it
| the silent killer. Tech debt paralyzes companies all the time,
| but nobody hears about it because there's zero advantage to the
| companies in sharing that info. I'm constantly gobsmacked by how
| many companies are stuck on way-past-EOL libraries because of
| bad architecture decisions. Or can't deal with heinous noisy
| neighbour issues without spending through the nose on AWS
| because of bad architecture decisions. Or are spending the farm
| to rewrite part of the stack that can't perform well enough to
| land enterprise clients, but the rewrite is going to potentially
| bankrupt the company... because of bad architecture decisions.
| This shit happens ALL THE TIME. Even to very big companies!
|
| The tech debt situation is going to become so, so much worse. My
| guess is there will be a whole lot of "dead by year five"
| companies built on AI.
| inetknght wrote:
| > _I work in tech diligence. This means the companies I talk to
| cannot lie or refuse to answer a question_
|
| Nice. How do I get into that kind of position?
|
| > _Tech debt paralyzes companies all the time, but nobody hears
| about it because there's zero advantage to the companies in
| sharing that info._
|
| If nobody hears about it, then how do you hear about it?
| Moreover, what makes you think it's _tech debt_ and not
| _whatever reason the business told you_? And further, if it's
| _tech debt_ and _not_ whatever reason the business told you,
| then don't you think the business _lied_? And didn't you just
| say they're not allowed to lie?
|
| Can you clear that up?
| iainctduncan wrote:
| Sure I can clear it up.
|
| What happens is once they are into a potential deal, they go
| into exclusivity with the buyer, and we get brought in for a
| whack of interviews and going through their docs. Part of that
| period includes NDAs all around, and the agreement that they
| give us access to whatever we need (with sometimes some back
| and forth over IP). So could they lie? Technically yes, but
| as we ask to see things to demonstrate that what they said is
| true, and it would break the contract they've signed with the
| potential acquirer, that would be extremely risky. I have
| heard of cases where people did, it was discovered after the
| deal, and it retroactively cost the seller a giant chunk of
| cash (at risk of an even bigger lawsuit). We typically have
| two days of interviews with them and we specifically talk
| about tech debt.
|
| Our job is to ask the right questions and ask to see the
| right things to get the goods. We get to look at code, Jira,
| roadmap docs, internal dev docs, test runner reports,
| monitoring and load testing dashboards, and so on. For
| example, if someone said something vague about responsiveness,
| we'll look into it, ask to see the actual load metrics, ask
| how they test it and profile, and so on.
|
| I got into it because I had been the CTO of a startup that went
| through an acquisition, knew someone in the field, didn't
| mind the variable workload of being a consultant, and have
| the (unusual) skill set: technical chops, leadership
| experience, interviewing and presenting skills, project
| management, and the ability to write high quality reports.
| Having now been in the position of hiring for this role, I
| can say that finding real devs who have all those traits is
| not easy!
| inetknght wrote:
| > _I can say that finding real devs who have all those
| traits is not easy!_
|
| Sounds like a very high bar to meet, that's for sure!
|
| > _We typically have two days of interviews with them and
| we specifically talk about tech debt._
|
| > _Our job is to ask the right questions and ask to see the
| right things to get the goods. We get to look at code,
| Jira, roadmap docs, internal dev docs, test runner reports,
| monitoring and load testing dashboards, and so on._
|
| Call me a skeptic but, given that scope, I have trouble
| believing that two days is sufficient to iron out what
| kinds of tech debt exist in an organization of any size
| that matters.
| iainctduncan wrote:
| Well, the two days are just for interviews. So we have a
| lot longer to go through things and we send over a big
| laundry-list info request beforehand. But you're right,
| it's never enough time to be able to say "we found all
| the debt". It's definitely enough time for us to find out
| _a lot_ about their debt, and this is always worth it to
| the acquirer (these are mid to late stage acquisitions,
| so typically over $100M).
|
| Also, you'd be surprised how much we can find out. We are
| talking directly to devs, and we're good at it. They are
| usually very relieved to be talking to real coders (e.g.,
| I'm doing a PhD in music with Scheme Lisp and am an open
| source author, most of our folks are ex CTOs or VP Engs)
| and the good dev leaders understand that this is their
| chance to get more resource allocation to address debt
| post-acquisition. The CEOs can often be hand-wavy
| BS'ers, but the folks who have to run the day-to-day dev
| process are usually happy to unload.
| cleandreams wrote:
| Sounds about right. The startup I worked for (acquired by a
| FANG) turned over the whole code base, for example.
| iainctduncan wrote:
| If I may ask, were you directly involved in the process?
| I'm writing a book based on my experiences and would love
| to hear more about how FANG diligence differs. I can be
| reached at iain c t duncan @ email provider who is no
| longer not evil in case you are able and interested in
| chatting
| cess11 wrote:
| My work puts me in a similar position, but after they've gone
| bankrupt, and I see the same thing. It's common to not invest in
| good enough developers early enough to manage to delete,
| refactor, upgrade and otherwise clean their software in time to
| be able to handle growth or stagnation on the business side.
|
| Once I saw a piece of software built mostly by one person, in
| part because he did the groundwork by pushing straight to main
| with . as the only commit message and didn't document anything.
| When they ended up in my lap he had failed for six months to
| adapt their system to changes in the data sources they depended
| on.
|
| Sometimes the business people fuck up too, like using an
| excellent software system to do credit-intensive trading
| without hedging for future interest rate rises.
|
| I'm not so sure machines will solve much on either side even
| though some celebrities say they're sure they will.
| iainctduncan wrote:
| That sounds really interesting. If you would be open to
| chatting sometime, I'd love to hear more. I'm writing a book
| about preparing companies for this, and can be reached at iain
| c t duncan @ email provider who is no longer not evil.
| m3kw9 wrote:
| It will destroy your own company initially, but if the AI is
| proven to do it better than humans, a lot of devs will be
| converted into AI assistants who guide the AI. You'd still need
| to know programming.
| skeeter2020 wrote:
| How can this opinion piece miss the big thing, though? Even if
| it's an accurate prediction, that will be someone else's
| problem. My
| last three companies have been Publicly traded, Private Equity,
| VC and PE. The timelines for decision makers in any of these
| scenarios max out around 4 years, and for some it is less than a
| year. They're not shooting themselves in the foot, rather
| handicapping the business and moving on. The ones who time it
| right will have an extra-big payday, while the ones who do poorly
| will buy all these duds. Meanwhile the vast majority lose either
| way.
| armchairhacker wrote:
| Counterpoint: lots of software is relatively simple at its
| core, so perhaps we don't need nearly as many employed developers
| as we have today. Alternatively, we have far more developers
| today, so perhaps companies are only firing to re-hire for lower
| salaries.
|
| Regarding the first hypothesis: For example, one person can make
| a basic social media site in a weekend. It'll be missing
| important things from big social medias: 1) features (some of
| them small but difficult, like live video), 2) scalability, 3)
| reliability and security, and 4) non-technical aspects
| (promotion, moderation, legal, etc.). But 1) is optional; 2) is
| reduced if you use a managed service like AWS and throw enough
| compute at it, then perhaps you only need a few sysadmins; 3) is
| reduced to essentials (e.g. backups) if you accept frequent
| outages and leaks (immoral but those things don't seem to impact
| revenue much); and 4) is neither reducible nor optional but
| doesn't require developers.
|
| I remember when the big tech companies of today were (at least
| advertised as) run by only a few developers. They were much
| smaller, but still global and handling $millions in revenue. Then
| they hired more developers, presumably to add more features and
| improve existing ones, to make profit and avoid being out-
| competed. And I do believe those developers made features and
| improvements to generate more revenue than their salaries and
| keep the companies above competition. But at this point, would
| _more_ developers generate even more features and improvements to
| offset their cost, and are they necessary to avoid competition?
| Moreover, if a company were to fire most of its developers,
| keeping just enough to maintain the existing systems, and direct
| resources elsewhere (e.g. marketing), would they make more profit
| and out-compete better?
|
| Related, everyone knows there's lots of products with needless
| complexity and "bullshit jobs". Exactly _how_ much of that
| complexity is needless and how many of those jobs are useless is
| up for debate, and it may be less than we think, but it may
| really not be.
|
| I'm confident the LLMs that exist today can't replace
| developers, and I wouldn't be surprised if they fail to
| "augment" developers enough for fewer developers + LLMs to
| maintain the same productivity. But perhaps many programmers
| are being fired because many programmers just aren't necessary,
| and AI is just a placebo.
|
| Regarding the second hypothesis: At the same time, there are many
| more developers today than there were 10-20 years ago. Which
| means that even if most programmers _are_ necessary, companies
| may be firing them to re-hire later at lower salaries. Despite
| the long explanations above, this may be the more likely outcome.
| Again, AI is just an excuse here, maybe not even an intentional
| one: companies fire developers because they _believe_ AI can
| improve things, it doesn't, but then they're able to re-hire
| cheaper anyways.
|
| (Granted, even if one or both of the above hypotheses are true,
| I don't think it's hopeless for software developers.
| Specifically because I believe many developers will have to
| find other work, but it will be interesting work; perhaps even
| involving programming, just not the kind you learned in
| college, and at minimum involving the kind of reasoning you
| learn from development. The reason being that, while both are
| important to
| some extent, I believe "smart work" is generally far more
| important than "hard work". Especially today, it seems most of
| society's problems aren't because we don't have enough resources,
| but 1) because we don't have the logistics to distribute them,
| and 2) because of problems that aren't caused by lack of
| resources, but mental health (culture disagreements,
| employer/employee disagreements, social media toxicity,
| loneliness). Especially 2). Similarly to how people moved from
| manual labor to technical work, I think people will move from
| technical work; not back to manual labor, but to something else,
| perhaps something social.)
| nyarlathotep_ wrote:
| > I'm confident the LLMs that exist today can't replace
| developers, and I wouldn't be surprised if they fail to
| "augment" developers enough for fewer developers + LLMs to
| maintain the same productivity. But perhaps many programmers
| are being fired because many programmers just aren't
| necessary, and AI is just a placebo.
|
| The last part is the important part.
|
| There are loads of jobs that don't "need" to exist in software
| gigs at many companies generally, ranging from lowly
| maintenance type CRUD jobs to highly complex work that has no
| path to profitability, but was financially justifiable a few
| years prior in a different financial environment.
|
| Examples: IIRC, Amazon had some game engine thing that had
| employed a bunch of graphics programmers (Lumberyard maybe?)
| that they scrapped (probably for cost reasons), Alexa has been
| a public loss leader and has had loads of layoffs. Google had
| their game streaming service that got shelved and other stuff I
| can't recall that they've surely abandoned in recent years,
| etc.
|
| Those roles were certainly highly skilled, but mgmt saw no path
| to profit or whatever, so they're gone.
|
| There's also the opposite in some cases. Many f500s are pissing
| away money to get some "AI" "Enabled" thing for their whatever
| and throwing money at companies like Accenture et al to get
| them some RAG chatbot thing.
|
| There's certainly a brief period where those opportunities will
| increase as every CTO wants to "modernize" and "leverage AI",
| although I can't imagine it lasting.
| siliconc0w wrote:
| It's already pretty hard to find engineers that can actually go
| deep on problems. I predict this will get even worse with AI.
| thro1 wrote:
| What about... empowering programmers with AI - can it create any
| useful things?
| clbrmbr wrote:
| > the real winners in all this: the programmers who saw the chaos
| coming and refused to play along. The ones who [...] went deep
| into systems programming, AI interpretability, or high-
| performance computing. These are the people who actually
| understand technology at a level no AI can replicate.
|
| Is there room for interpretability outside of major AI labs?
| tomrod wrote:
| When an organization actively swaps out labor for capital,
| expecting deep savings and dramatic ROI, instead of incrementally
| improving processes, it deserves the failure that's coming. Change
| management and modernization are actually meaningful, despite the
| derision immature organizations show towards the processes.
| lenerdenator wrote:
| Keeping things around doesn't drive shareholder value. Firing
| employees making six figures does.
| scoutt wrote:
| What I see when producing code with AI (C/C++, Qt) is that often
| it gives output for different versions of a given library. It's
| like it can't understand (or doesn't know) that a given function
| is now obsolete and another method must be used. Sometimes it can
| be corrected.
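|
| A Node-flavored analogue of the failure mode I mean (not my
| actual Qt case, but the same shape; url.parse is Node's
| documented legacy API):
|
|     // What models often emit: the legacy API...
|     import url from "node:url";
|     const parts = url.parse("https://example.com/a?b=1");
|
|     // ...instead of the current WHATWG URL API:
|     const parsed = new URL("https://example.com/a?b=1");
|     console.log(parsed.searchParams.get("b")); // "1"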
|
| I think there will be a point at which humans will no longer be
| motivated to produce enough material for the AI to update. Like,
| why would I write/shoot a tutorial or ask/answer a question in
| a forum if people are now going directly to ask some AI?
|
| And since AI is being fed with human knowledge at the moment, I
| think the flow of good material out there (of the kind used so
| far for training) is going to slow down.
| wait for some repos to be populated/updated to understand the
| changes. Or it will have to read the new documentation (if any),
| or understand the changes from code (if any).
|
| All this if it wasn't the AI to introduce the changes itself.
| gip wrote:
| I think that some engineers will still be needed to maintain old
| codebases for a while, yes.
|
| But it's pretty clear that the codebases of tomorrow will be
| leaner and mostly implemented by AI, starting with apps (web,
| mobile,...). It will take more time for scaling backends.
|
| So my bet is that the need for software engineering will follow
| what happened for stock brokers. The ones with basic to average
| skills will disappear, automated away (it has already happened in
| some teams at my last job). Above-average engineers will still
| be employed but comp will eventually go down. And there will be
| a small class of expert / smartest / most connected engineers
| who will see their comp go up and up.
|
| It is not the future we want but I think it is what is most
| likely to happen.
| msaspence wrote:
| What makes you think that AI is going to produce leaner
| codebases? They are trained on human codebases. They are going
| to end up emulating that human code. It's not hard to imagine
| some improvement here, but my gut is there just isn't enough
| good code out there to train a significant shift on this.
| gip wrote:
| Good question and I have no strong answer today. But I think
| we'll find a way to tune models to achieve this very soon.
|
| I see such a difference between what is built today and
| codebases from 10 years ago, with indirections everywhere,
| unnecessary complexity... I interviewed for a company with a
| 13-year-old RoR codebase recently; after a few mins looking at
| the code I decided I didn't want to work there.
| jjallen wrote:
| I'll just say that AI for coding has been amazingly helpful for
| finding bugs and thinking things through and improving things
| like long functions and simplifying code.
|
| But it can absolutely not replace entire programmers at this
| point, and it's a long way off being able to, say, create,
| tweak, build and deploy entire apps.
|
| That said, this could totally change in the next handful of
| years, and I think if someone worked just on creating purely
| JS/React websites, you could build something at this point that
| does this. Or at least I think that I could build this: the
| user sort of talks to the AI, describes changes, and they
| eventually get done. Or if not, we are approaching that point.
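|
| A minimal sketch of the loop I mean (hypothetical: llm() is a
| stand-in for whatever model call you'd wire up):
|
|     import { execSync } from "node:child_process";
|     import * as readline from "node:readline/promises";
|     import { stdin, stdout } from "node:process";
|
|     // Stand-in: returns a unified diff for the request.
|     declare function llm(prompt: string): Promise<string>;
|
|     async function changeLoop(repoDir: string): Promise<void> {
|       const rl = readline.createInterface(
|         { input: stdin, output: stdout });
|       while (true) {
|         const request = await rl.question("Describe a change: ");
|         if (!request) break;
|         const diff = await llm(`Unified diff for: ${request}`);
|         execSync("git apply -", { cwd: repoDir, input: diff });
|         try {
|           execSync("npm run build", { cwd: repoDir });
|         } catch {
|           // a failing build rolls the patch back
|           execSync("git checkout -- .", { cwd: repoDir });
|         }
|       }
|       rl.close();
|     }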
| intrasight wrote:
| Firing programmers for using AI? I do see people asking on social
| media how to filter out job candidates who are using AI in
| interviews.
|
| But if they had meant replacing programmers with AI (bad title),
| I'm much more concerned about replacing non-programmers with AI.
| It's gonna happen on a huge scale, and we don't yet have a
| regulatory regime to protect labor from capital.
| tharmas wrote:
| Programmers are training their replacement.
| meristohm wrote:
| Follow the money, all the way down to Mother Earth; which boats
| are lifted most, and at what cost?
| blibble wrote:
| ultimately Facebook and Google are completely unimportant; if
| they disappeared tomorrow the world would keep going
|
| however, I for one can't wait for unreliable garbage code in:
|     - engine management systems
|     - aircraft safety and navigation systems
|     - trains and railway signalling systems
|     - elevator control systems
|     - operating systems
|     - medical devices (pacemakers, drug dispensing devices,
|       monitoring, radiography control, etc)
|     - payment systems
|     - stock exchanges
|
| maybe AI generated code is the Great Filter?
| rapind wrote:
| What irks me the most about AI is the black box mutability.
| Give it the same question of reasonable complexity and get a
| slightly different answer every time.
|
| I also dislike mass code generation tools. The code generation
| is basically just a cache of the AI's reasoning, right? So it's
| sort of pre-optimization. Eventually, once cheap enough, I
| would assume the AI reasons in real time (producing temporary
| throw-away code for every request). But the mutability issue is
| still there. I think we need to be able to "lock-in" on the
| reasoning, but that's a challenge and probably falls apart with
| enough inputs / complexity.
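|
| (For what it's worth, sampling can be partially pinned down; a
| sketch assuming an OpenAI-style chat completions endpoint,
| where temperature 0 plus a fixed seed reduces, but doesn't
| eliminate, run-to-run variance:)
|
|     const res = await fetch(
|       "https://api.openai.com/v1/chat/completions", {
|         method: "POST",
|         headers: {
|           "Content-Type": "application/json",
|           Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
|         },
|         body: JSON.stringify({
|           model: "gpt-4o",
|           messages: [{ role: "user", content: "same question" }],
|           temperature: 0, // greedy-ish decoding
|           seed: 42,       // best-effort determinism
|         }),
|       });
|     const data = await res.json();
|     console.log(data.choices[0].message.content);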
| ArthurStacks wrote:
| Total delusion from the author. If tech companies need a human
| dev, there'll be plenty of them across the globe jumping at the
| chance to do it for peanuts. You're soon to be extinct. Deal
| with it.
| dwheeler wrote:
| Today AI can generate code. Sometimes it's even correct.
|
| AI is a useful _aid_ to software developers, but it requires
| developers to know what they're doing. We need developers to
| know more, not less, so they can review AI-generated code, fix
| it when it's wrong, etc.
| giancarlostoro wrote:
| AI bros are like crypto bros. Really trying to hype it up beyond
| what it's currently capable of and what it will be capable of in
| the near future.
|
| I have all sorts of people telling me I need to learn AI or I
| will lose my job and get left in the dust. AI is still a tool,
| not a worker.
| guccihat wrote:
| When the AI dust settles, I wonder who will be left standing
| among the groups of developers, testers, scrum masters, project
| leaders, department managers, compliance officers, and all the
| other roles in IT.
|
| It seems the general sentiment is that developers are in danger
| of being replaced entirely. I may be biased, but it seems not to
| be the most likely outcome in the long term. I can't imagine how
| such companies will be competitive against developers who replace
| their boss with an AI.
| __MatrixMan__ wrote:
| > I can't imagine how such companies will be competitive
| against developers who replace their boss with an AI.
|
| Me neither, but I think it'll be a gratifying fight to watch.
| phist_mcgee wrote:
| Please take the scrum masters first.
| cjoshi66 wrote:
| Knowing the difference between programmers able to generate AI
| code vs those who can actually explain it matters. If orgs can do
| that, then firing programmers should be fine. If they can't,
| things might get ugly for _some_ but not _all_.
| larve wrote:
| This is missing the fact that budding programmers will also
| embrace these technologies, in order to get stuff done and
| working and fulfill their curiosity. They will in fact grow up to
| be much more "AI native" than current more senior programmers,
| except that they are turbocharging their exploration and learning
| by having, well, a full team of AI programmers at their disposal.
|
| I see it like when I came of age in the '90s, with my first
| laptop and linux, confronted with the older generation that grew
| up on punchcards or expensive shared systems. They were
| advocating for really taking time to write your program out on
| paper or architecting it up front, while I was of the "YOLOOOO,
| I'll hack on it until it compiles" persuasion. Did it keep me
| from learning the fundamentals, or becoming a solid engineer? No. In
| fact, the "hack on it until it compiles" approach became a
| pillar of today's engineering: TDD, CI/CD, etc...
|
| It's up to us to find the right workflows for both mentoring /
| teaching and for solid engineering, with this new, imo paradigm-
| changing technology.
| larve wrote:
| Another aspect this is missing is that, if AI works well enough
| to fire people (it already does, IMO), there is a whole world
| of software that was previously out of reach to be built. I
| wouldn't build a custom photography app for a single
| individual, nor would I write a full POS system for a 3-person
| bakery. The costs would be prohibitive. But if a single
| developer/product designer can now be up to the challenge, be
| it through a "tweak wordpress plugins until it works" or
| through more serious engineering, there is going to be a whole
| new industry of software jobs out there.
|
| I know that it works because the amount of software I now write
| for friends, family or myself has exploded. I wouldn't spend 4
| weekends on a data cleanup app for my librarian friend. But I
| can now, in 2-3 h, get something really usable and pretty, and
| it's extremely rewarding.
| SkyBelow wrote:
| AI native like recent digital natives, who have more time using
| software but far less time exploring how it works and less
| overall success at using digital tools?
|
| AI reminds me of calculators. For someone who is proficient in
| math, they boost speed. For those learning math, it becomes a
| crutch and eventually stops their ability to learn further
| because their mind can't build upon principles fully outsourced
| to the machine.
| larve wrote:
| Yet calculators don't seem to have reduced the number of
| people in mathematics, engineering and other mathematics
| heavy fields. Why would it be any different with people using
| AI to learn coding?
| nritchie wrote:
| It's been noted that LLMs' output quality decays as they ingest
| more LLM generated content in their training. Will the same
| happen for LLM generated code as more and more of the code on
| Github is generated by LLMs? What then?
| jdmoreira wrote:
| The approach has changed. It's all about test/inference time
| now, and reinforcement learning on top of the base models.
| There is no end in sight anymore; training data won't be a
| limiting factor when reasoning and self-play are the approach.
| Terretta wrote:
| making tech != using tech
| nu2ycombinator wrote:
| Deja vu. Complaints about using AI sound very similar to the
| early times of offshoring/outsourcing. At the end of the day,
| corporates go for the most profitable solution, so AI is going
| to replace some percentage of headcount.
| usixk wrote:
| Agreed - AI agents are simply a form of outsourcing and companies
| that go all in on that premise will get bonked hard.
|
| Getting into cyber security might be a gold mine in spite of all
| the AI generated code that is going to be churned out in this
| transition period.
| Me000 wrote:
| Name one successful company that doesn't outsource developer
| labor outside America.
| regnull wrote:
| This reminds me of the outsourcing panic, many years ago. Hiring
| cheaper talent overseas seemed like a no brainer, so everyone's
| job was in danger. Of course, it turned out that it was not as
| simple, it came with its own costs, and somehow the whole thing
| just settled. I wonder if the same will happen here. In this line
| of work, it's almost as if, when you know exactly what you
| want to build, you are 90% there. AI helps you with the rest.
| sunami-ai wrote:
| I agree with the statement in the title.
|
| Using AI to write code does two things:
|
| 1. Everything seems to go faster at first until you have to debug
| it because the AI can't seem to fix the issue... It's
| hard enough to debug code you wrote yourself. However, if you
| work with code written by others (team environment) then maybe
| you're used to this, but not being able to quickly debug code
| you're responsible for will shoot you in the foot.
|
| 2. Your brain neurons in charge of code production will be
| naturally re-assigned for other cognitive tasks. It's not like
| riding a bicycle or swimming which once learned is never
| forgotten. It's more like advanced math, which if you don't
| practice you can forget.
|
| Short term gain; long term pain.
| dkjaudyeqooe wrote:
| Essentially: people are vastly overestimating AI ability and
| vastly underestimating HI ability.
|
| Humans are supremely adaptable. That is our defining attribute
| as a species. As a group we can adapt to more or
| less any reality we find ourselves in.
|
| People with good minds will use whatever tools they have to
| enhance their natural abilities.
|
| People with less good minds will use whatever tools they have
| to cover up their inability until they're found out.
| karaterobot wrote:
| > The result? We'll have a whole wave of programmers who are more
| like AI operators than real engineers.
|
| I was a developer for over a decade, and pretty much what I did
| day-to-day was plumb together existing front end libraries, and
| write a little bit of job-specific code that today's LLMs could
| certainly have helped me with, if they'd existed at the time. I
| agree that the really complicated stuff can't yet be done by AI,
| but how sure are we that that'll always be true? And the idea
| that a mediocre programmer can't debug code written by another
| entity is also false, I did it all the time. In any case, I don't
| resonate with the idea that the bottom 90% of programmers are
| doing important, novel, challenging programming that only a
| special genius can do. They paid us $180k a year to download NPM
| packages because they didn't have AI. Now they have AI, and the
| future is uncertain with respect to just how high programmers
| will be flying ten years from now.
| NicuCalcea wrote:
| Workers in various industries have pled for their jobs and
| programmers said "no, my code can replace you". Now that
| automation is coming for them, it's suddenly "the dumbest
| mistake".
|
| Tech has "disrupted" many industries, leaving some better, but
| many worse. Now that "disruption" is pointed inwards.
|
| Programmers will have to adapt to the new market conditions like
| everyone else. There will either be fewer jobs to go around (like
| what happened to assembly line workers), or they will have to
| switch to doing other tasks that are not as easy to replace yet
| (like bank tellers).
| j-krieger wrote:
| The really funny thing is that now we're the replacers and the
| replaced at the same time. We plead to ourselves.
|
| The wings have begun melting and nothing will stop it. Finally,
| Icarus has flown too close to the sun.
| NicuCalcea wrote:
| Ultimately, it's executives and shareholders who make the
| decisions, and they will always be able to pay some
| programmers enough to replace the other ones. I don't think
| of developers as having a lot of professional or class
| solidarity.
| jopsen wrote:
| LLMs might enable us to build things we couldn't build before.
|
| It's just as plausible that the market will simply grow.
| skirge wrote:
| No accountant was fired when Microsoft Clippy was introduced. AI
| is nice for prototyping and code completion but that's all.
| atoav wrote:
| Meanwhile the newest model is like "Oh just run this snippet"
|
| And the snippet will absolutely ruin your corp if you run it.
| arscan wrote:
| What are these fired programmers going to do? Disappear? They'll
| build stuff, using the same plus-up AI tooling that enabled their
| old boss to fire them. So guess what, their old boss just traded
| employees for competitors. Congrats, I guess?
|
| Zuck declaring that he plans on dropping programmer head-count
| substantially, to me indicates that they'll have a much smaller
| technological moat in the future, and they won't be paying off
| programmers to not build competing products anymore. I'm not sure
| he should be excited about that.
| Aperocky wrote:
| What moat does Meta have today?
|
| I'd say there is a moat, but it's not on the tech side.
|
| Tiktok flew right through the moat, and only a small part of
| that is about tech.
|
| A lot of development on AI is exciting and Meta is a big part
| of that, but there isn't any real moat there either.
| arminiusreturns wrote:
| I fear we are about to see refreshed patent trolling
| conflicts regarding software designs, as it's all in the IP
| these days, which is the "moat" you are looking for.
| arscan wrote:
| Presumably it is going to be easier for increasingly smaller
| and smaller teams to make highly polished, scalable and
| stable products that will be appealing and addictive and
| resonate more with their users than whatever Meta can come up
| with. I suspect that there will be many, many more viable
| shots taken at Meta's incumbent positions than have been
| taken historically because development costs associated with
| doing so will simply be so much lower. Meta used to need
| thousands of talented and expensive software engineers. They
| are saying they don't anymore. Well, that means their
| competitors don't either, which lowers the bar for
| competition.
|
| I get that it wasn't just the vast army of talented engineers
| they kept on staff that formed a moat. But it certainly
| helped, otherwise they wouldn't have paid so much to have
| them on staff.
|
| Point taken though, Meta has a lot more going for it than a
| simple technological advantage.
| booleandilemma wrote:
| I'm hoping that developers who have entered management positions
| will be able to talk their fellow managers out of this. I can
| understand if some non-technical MBA bozo doesn't understand, but
| former developers must see through the hype.
| osnium123 wrote:
| If you are a junior engineer who just graduated, what would you
| do to ensure that you learn the needed skills and not be overly
| reliant on AI?
| fragmede wrote:
| I wouldn't, because AI isn't the problem. With machines being
| able to run deepseek locally, the problem to look out for isn't
| the possibility that the web service will go down and you have
| to live without it, it's that, as their capabilities currently
| stand, they can't fix or do everything.
|
| I learned to program some time before AI became big, and back
| when I was an intern, I'd get stuck in a rut trying to debug
| some issue. When I got stuck in the rut, it would be tempting
| to give up and just change variables and "if" statements
| blindly, just hoping it would somehow magically fix things.
| Much like I see newer programmers get stuck when the LLM gets
| stuck.
|
| But see, that's where you earn your high paying SWE salary. For
| doing something that other people cannot. So my advice to Jr
| programmers isn't to avoid using LLMs, it's to use them
| liberally until you find something or somewhere they're bad at,
| and look at _that_ as the true challenge. With LLMs,
| programming easy shit got easy. If you're not running into
| problems with the LLM, switch it up and try a different
| language with more esoteric libraries and trickier bugs.
| jcon321 wrote:
| Well before AI, us old guys had to google our own issues and
| copy/paste from stackoverflow. When I was a junior I never
| thought about being overly reliant on "google or
| stackoverflow", but LLMs are slightly different. I guess it
| would be like if I googled something and always trusted the
| first result. Maybe for your question it means not
| copying/pasting immediately what the LLM gives you and having
| it explain. Wasting a few mins asking the LLM to explain, for
| the sake of learning, still beats the amount of time I used to
| waste scanning google results.
| ambyra wrote:
| It's a meme at this point. People who don't know anything about
| programming (lex fridman): "It's revolutionary. People with no
| programming skills can create entire apps." People who program
| for a living: "It can reproduce variants of code that have been
| implemented already 1000s of times, and... nothing else".
| stpedgwdgfhgdd wrote:
| Yesterday, using Aider and an OpenAI model (forgot which one it
| picks by default), I asked it to check some Go code for
| consistency. It made a few changes, some OK, but also some that
| just did not compile. (The model did not understand the local
| scoping of vars in an if-then clause.)
|
| It is just not reliable enough for mainstream enterprise
| development. Nice for a new snake game...
| karxxm wrote:
| Replacing juniors with AI is stupid because who will be the next
| senior? AI won't learn anything while performing inference only.
| 827a wrote:
| > Imagine a company that fires its software engineers, replaces
| them with AI-generated code, and then sits back, expecting
| everything to just work. This is like firing your entire fire
| department because you installed more smoke detectors. It's fine
| until the first real fire happens.
|
| I feel like this analogy really doesn't capture the situation,
| because it implies that it would take some event to make
| companies realize they made a mistake. The reality right now is:
| You'd notice it instantly. Product velocity would drop to zero.
| Who is prompting the AI?
|
| The AI-is-replacing-programmers debate is honestly kinda tired,
| on both sides. It's just not happening. It _might_ be happening in
| the same way that pirated movies "steal" income from hollywood:
| maybe companies are expanding more slowly, because we're ramping
| up per-capita productivity because engineers are learning how to
| leverage it to enhance their own output (and it's getting better
| and better). But, that's how every major tool and abstraction
| works. If we still had to write in assembly there'd be 30x the
| number of engineers out there than there are.
|
| There's no mystical point where AI will get good enough to
| replace engineers, not because it won't continue getting better,
| but because the economic pie is continually growing, and as the
| AI Nexus Himself, Marc Andreesen, has said several times:
| Humanity has an infinite demand for code. If you can make
| engineers 10x more efficient, what will happen in most companies
| is: we don't want to cut engineering costs by N% and stagnate, we
| want 10x more code and growth. Maybe we hire fewer engineers
| going forward.
|
| > But with the AI craze, companies aren't investing in junior
| developers. Why train people when you can have a model spit out
| boilerplate?
|
| This is not happening. It's fun, pithy reasoning that Good and
| Righteous White Knight Software Engineers can ascribe to the
| Evil and Bad HR and Business Leadership people, but it's just
| not, in any meaningful or broad sense, a narrative that you hear
| while hiring.
|
| The reason why juniors are struggling to find work right now is
| literally just because the industry is in a down cycle. During
| down cycles, companies are going to prioritize stability, and
| seniority is stability. That's it.
|
| When the market recovers, and as AI gets better and more
| prolific, I think there's a reality where juniors are actually a
| great ROI for companies, thanks to AI. They've been using it
| their whole career. They're cheaper. AI might be a productivity
| multiplier for all engineers, but it will _definitely_ be a
| productivity _normalizer_ for juniors: using it to check for
| mistakes and to learn libraries and frameworks faster, it's such
| a great upleveling tool for juniors.
| drusha wrote:
| We are already seeing how the speed of development plays a more
| important role than the quality of the software product (for
| example, the use of Electron in the most popular software).
| Software will become shittier but people will continue to use it.
| So LLMs will become just another abstraction level in modern
| programming, like JS frameworks. No company will regret firing
| real programmers because LLMs will be cheaper and end users don't
| care about performance.
| Animats wrote:
| Front-end development should have been automated by now. After
| all, it once was. Viamall. Dreamweaver. Mozilla Composer. Wix.
| Humans should not be writing HTML/CSS.
| sangnoir wrote:
| > The ones who didn't take FAANG jobs but instead went deep into
| systems programming, AI interpretability, or high-performance
| computing
|
| I appreciate a good FAANG hatefest, but what the gosh-darn heck
| is this? Does the author seriously think _all_ FAANG engineers
| only transform and sling gRPC all day? Or that they blindly
| stumbled into being hyperscalers?
|
| The author should randomly pick a mailing list on any of those
| topics (systems programming, AI interpretability, HPC) and count
| the number of emails from FAANG domains.
| zoogeny wrote:
| I'm not sure why people are so sure one way or the other. I mean,
| we're going to find out. Why pretend you have a crystal ball and
| can see the future?
|
| A lot of articles like this just _want_ to believe something is
| true and so they create an elaborate argument as to why that
| thing is true.
|
| You can wrap yourself up in rationalizations all you want. There
| is a chance firing all the programmers will work. Evidence beats
| argument. In 5 years we'll look back and know.
|
| It is actually probably a good idea to hedge your bets either
| way. Use this moment to trim some fat, force your existing
| programmers to work in a slightly leaner environment. It doesn't
| feel nice to be a programmer cut in such an environment but I can
| see why companies might be using this opportunity.
| uh_uh wrote:
| This. Articles like this are examples of motivated reasoning
| and seem to be coming from a place of insecurity by programmers
| who feel their careers threatened.
| just-another-se wrote:
| Though I disagree with most of what's said here, I do agree that
| the new fleet of software engineers won't be that technically
| adept. But what if they don't need to be? Like how most
| programmers today don't need to know machine-level instructions
| to build something useful.
|
| I feel there will be a paradigm shift in what programming is
| altogether. I think programmers will be more like artists,
| painters who conceptualize an idea and communicate those ideas
| to AI to implement (not end to end though; in bits and pieces,
| we'd still need engineers to fit these bits and pieces together
| - think of a new programming language, but instead of syntax,
| there will be natural language prompting).
|
| I've tried to pen down these exact thoughts here:
| https://suyogdahal.com.np/posts/engineering-hacking-and-ai/
| czhu12 wrote:
| I find it interesting that the same community that has long
| questioned why companies like Facebook need 60,000 engineers
| ("20 good engineers is all you need") is now rallying against
| any cuts at all.
|
| AI makes engineers slightly more efficient, so there's slightly
| less need for as many. That's assuming AI is the true cause of
| any of these layoffs at all.
| localghost3000 wrote:
| Reminder to everyone reading FUD like this that tech bros are
| trying _very_ hard to convince everyone that these technologies
| are Fundamentally Disruptive and The Most Important Advancement
| Since The Steam Engine, when in fact they are somewhat useful
| content-generation machines whose output needs to be carefully
| vetted by the very programmers this article claims will be out
| of a job.
|
| To be clear: I am not saying this article is written in bad
| faith, and I agree that if its assertions come to pass, what it
| predicts would happen. I am just urging everyone to stop letting
| the Sam Altmans of the world tell you how disruptive this tech
| is. Tech is out of ideas and desperate to keep the money machine
| printing.
| CrimsonRain wrote:
| Most programmers are not worth anything. Firing them for AI, or
| for no reason at all, will not change anything.
|
| Whether AI can do stuff comparable to a competent senior SDE
| remains to be seen. But current AI definitely feels like a super
| assistant when I'm doing something.
|
| Anyone who says ChatGPT/Claude/Copilot etc. are bad is suffering
| from a skill issue. I'd go as far as to say such people are
| probably also really bad at working with junior engineers.
| Really bad as teachers too.
| cromulent wrote:
| LLMs are good at producing plausible statements and responses
| that radiate awareness, consideration, balance, and at least
| superficial knowledge of the technology in question. Even if they
| are non-committal, indecisive, or even inaccurate.
|
| In other words, they are very economical replacements for middle
| managers. Have at it.
| pipeline_peak wrote:
| AI will only raise the bar for the expected outcome of future
| programmers. It's just automated pair programming really.
|
| The argument that "The New Generation of Programmers Will Be
| Less Prepared" is too cynical. Most of us aren't writing
| algorithms anyway; programmers may be, but not Software
| Engineers, which is who I really think the author is referring
| to.
|
| Core libraries make it so SWEs don't have to write linked lists.
| Did that make our generation "less prepared", or did it give us
| the opportunity to focus our time on what really matters, like
| delivering products?
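|
| Go, for one, ships a doubly linked list in its standard library
| (container/list), so nobody has to hand-roll one:
|
|     package main
|
|     import (
|         "container/list"
|         "fmt"
|     )
|
|     func main() {
|         l := list.New() // stdlib doubly linked list
|         l.PushBack("deploy")
|         l.PushFront("review")
|         for e := l.Front(); e != nil; e = e.Next() {
|             fmt.Println(e.Value) // "review", then "deploy"
|         }
|     }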
| aurizon wrote:
| I have an image of 10,000 monkeys + typewriter + time =
| Shakespeare... Of course, these typed pages would engender a
| paper shortage. So the same 10,000 LLMs will create a similar
| amount of 'monkeyware' - I can see monkey testers roaming
| through this chaff for usable gems to be incorporated into a
| structure operated by humans (our current coder base) to
| engineer into complex packages. Will this employ the human crews
| and allow a greater level of productivity? Look at Win11 = a
| huge mass full of flaws/errors (found daily). In general,
| increased productivity has worked to increase GDP - will this
| continue? Or will we be overrun by smarter monkeys?
| penjelly wrote:
| I've switched sides on this issue. I do think LLMs will reduce
| headcount across tech. Smaller teams will take on more features
| and less code will be written by hand. It'll be easier to run a
| startup, freelance or experiment with projects.
| elif wrote:
| You don't fire developers and replace them with AI. This trope is
| often repeated and causes people to miss the actual picture of
| what's going on.
|
| You use AI to disrupt a market, and that market forces the
| startup employing the devs to go bankrupt.
|
| It's not a "this quarter we made a decision" thing.
|
| It's a thing that's happening right now all over the place and
| snowballing.
| tiku wrote:
| Doesn't matter; a few minutes after firing the last programmer,
| SkyNet will become operational.
| tippytippytango wrote:
| Senior devs have a decade-long reinforcement learning loop with
| the marketplace. That will be a massive advantage until they
| start RL on agents against the same environment.
| EternalFury wrote:
| I have been doing this for 30 years now. The software industry is
| all about selling variations of the same stuff over and over and
| over. But in the end, the more software there is out there, the
| more software is needed. AI might take it over and handle it all,
| but at some point, it would be cruel to make humans do it.
| zombiwoof wrote:
| Tech debt is hard enough when written by the person sitting next
| to you or no longer at the company
|
| It will be impossible to maintain when it's churned out by
| endless AI
|
| I can't imagine being a manager tasked with "our banking system
| lost 30 million dollars, can you find the bug?" when the code
| was written by AI and some intern maintains it
|
| I'll be watching with popcorn
| esalman wrote:
| I agree. Unfortunately, Tesla cannot be held accountable for an
| Autopilot crash, and OpenAI cannot be held accountable for bugs
| caused by Copilot code. But that's where we're (forced to be)
| headed as a society.
| zombiwoof wrote:
| Cursor being a 1 billion dollar company is just ridiculous
| norseboar wrote:
| Is there actually an epidemic of firing programmers for AI? Based
| on the companies/people I know, I wouldn't have thought so.
|
| I've heard of many companies encouraging their engineers to use
| LLM-backed tools like Cursor or just Copilot, a (small!) number
| that have made these kinds of tools mandatory (what "mandatory"
| means is unclear), and many companies laying people off because
| money is tight.
|
| But I haven't heard of _anybody_ who was laid off b/c the other
| engineers were so much more productive w/ AI that they decided to
| downsize the team, let alone replace a team entirely.
|
| Is this just my bubble? Mostly Bay Area companies, mostly in the
| small-to-mid range w/ a couple FAANG.
| groos wrote:
| A good analogy from not-so-distant past is the outsourcing wave
| that swept the tech industry. Shareholders everywhere were
| salivating at the money they would make by hiring programmers in
| India at 1/10 the cost while keeping their profits. Those of us
| who have been in the industry a while all saw how that went. I
| think this new wave will go roughly similarly. Eventually,
| companies will realize that to keep their edge, they need humans
| with creativity while "AI" will be one more tool in the bag. In
| the meantime, a lot of hurt will happen due to greed.
| delichon wrote:
| I've been programming full time with an LLM in an IDE for the
| last two weeks, and it's a game changer for me at the end of my
| career. I'm in future shock. I suddenly can barely live without
| it.
|
| But I'm not fearful for my job, yet. It's amazingly better, and
| much worse than a junior dev. There are certain instructions,
| however simple, that just do not penetrate. It gets certain
| things right 98% of the time, which makes me stop looking for the
| other 2% of the time where it absolutely sabotages the code with
| breaking changes. It is utterly without hesitation in defeating
| the entire purpose of the application in order to simplify a line
| of code. And yet, it can do so much simple stuff so fast and
| well, and it can be so informative about ridiculously obscure
| critical details.
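|
| A made-up sketch of the kind of "simplification" I mean (Go
| here, and upload is hypothetical, but the language doesn't
| matter):
|
|     package main
|
|     import (
|         "errors"
|         "time"
|     )
|
|     // upload stands in for a flaky upstream call.
|     func upload(data []byte) error { return errors.New("transient") }
|
|     // What I wrote: deliberately retry with backoff.
|     func save(data []byte) error {
|         var err error
|         for attempt := 0; attempt < 3; attempt++ {
|             if err = upload(data); err == nil {
|                 return nil
|             }
|             time.Sleep(time.Second << attempt) // 1s, 2s, 4s
|         }
|         return err
|     }
|
|     // The model's "simpler" save: compiles fine, one call
|     // instead of a loop, and the retry the feature depends on
|     // is silently gone.
|     func saveSimplified(data []byte) error {
|         return upload(data)
|     }
|
|     func main() { _ = save(nil) }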
|
| I have most of those faults too, just few enough to be worth my
| paycheck for a few more AI generations.
| QuantumGood wrote:
| "utterly without hesitation in defeating the entire purpose".
| So many examples, ever-more detailed prompts attempted as a
| solution. The more I try, the more "AI" seems workable only as
| "experienced prompt engineer with an AI stack".
| klik99 wrote:
| Honestly, show junior programmers a little more respect. It's
| such an old-person thing to say they're all going to become
| prompt engineers or similar. Why do the old always look at the
| young and claim they're all pulled by the tides of the zeitgeist
| and not thinking human beings who have their own opinions about
| stuff? Many smart people have a contrarian streak and won't just
| dive into AI tools wholesale. Honestly, a lot of the comments
| here are at the same level of critique as those facebook memes
| of a crowd of people with iphones for faces.
|
| Most people have ALWAYS taken the easy road and don't become the
| best programmers. AI is just the latest tool for lazier people or
| people who tend towards laziness. We will continue to have new
| good programmers, and the number of good programmers will
| continue to be not enough. None of that is caused by AI. I'm
| far from an AI advocate, but it will, someday, make the most
| boring parts of programming less tedious and be able to put
| "glue" kind of code in non-professional hands.
| pinoy420 wrote:
| What it's not good at is anything from the last 2 years (due to
| the training cutoff): it fails at the latest Remix, Svelte, and
| Selenium syntax, for example.
|
| This is an eternity in FE dev terms
| jarsin wrote:
| There's also the issue that a company doesn't own the copyright
| to a codebase generated primarily through prompts.
|
| So anyone can copy it and reproduce it anywhere. Get paid by a
| company to prompt AI. Take all the code home with you. Then,
| when you're tired of them, use the same code to undercut them.
| gitgud wrote:
| > _The next generation of programmers will grow up expecting AI
| to do the hard parts for them._
|
| This is the opposite of what I've seen. AI does the easy parts,
| only the hard parts are left...
| aaroninsf wrote:
| I'd like to suggest caution wrt a sentiment in this thread: that
| an actual phase change in the industry, i.e. readers here losing
| their jobs and job prospects, "doesn't feel right around the
| corner."
|
| Specifically, most of you are familiar with human cognitive
| error when reasoning about non-linearities.
|
| I'm going to assert and would cheerfully put money behind the
| prospect that this is exactly one of the domains within which
| nonlinear behavior is most evident and hence our intuitions are
| most wrong.
|
| Could be a year, could be a few, but we're going to hit the 80%
| case being covered _just fine thank you_ by run-of-the-mill
| automated tools, and then press on toward the asymptote.
| madrox wrote:
| I think anyone who is afraid of AI destroying our field just has
| to look at the history of DevOps. That was a massive shift in
| systems engineering and how production is maintained. It changed
| how people did their jobs, required people to learn new skills,
| and leaders to change how they thought about the business.
|
| AI is going to change a lot about software, but AI code tools are
| coming for SWEs the way Kubernetes came for DevOps. AI completely
| replacing the job function is unsubstantiated.
| fragmede wrote:
| If the job of SWEs after AI is to edit yaml, json, yaml
| templates, and yaml-templates-in-yaml-templates all day long,
| while waiting for ArgoCD, I quit.
___________________________________________________________________
(page generated 2025-02-11 23:01 UTC)