[HN Gopher] Thoughts on the Future of Software Development
___________________________________________________________________
Thoughts on the Future of Software Development
Author : rkwz
Score : 168 points
Date : 2024-03-18 10:29 UTC (12 hours ago)
(HTM) web link (www.sheshbabu.com)
(TXT) w3m dump (www.sheshbabu.com)
| chrisjj wrote:
| > the main argument against automating these tasks was that
| machines can't think creatively. Now this argument gets weaker by
| the day.
|
| Citation needed.
| dartos wrote:
| I mean it's definitely a weaker argument than 2 years ago.
|
| AI may very well plateau, but it might not.
| couchand wrote:
| > I mean it's definitely a weaker argument than 2 years ago.
|
| Is it? I see no evidence that machines are any closer to
| "thinking creatively" than they ever have been. We certainly
| have been developing our capacity for computation to a great
| extent, but it's not at all evident that statistical methods
| and creativity are the same thing.
| throwaway_08932 wrote:
| A practical definition of "creativity" is "can create
| interesting things." It's pretty clear that machines have
| become more "creative" in that sense over the last few
| years.
| boxed wrote:
| I have yet to see ChatGPT or something similar ask a
| followup to clarify the question. They just give you a
| "solution". That's equivalent of a super bad junior dev
| that will cause more trouble than the amount they will
| solve.
|
| That being said, I think we could make such a system. It
| just has to have training data that is competent...
| simonw wrote:
| Tell them to ask you follow up questions and they will.
|
| Some systems built on top of LLMs have this built in -
| Perplexity searches for example usually ask a follow up
| before running the search. I find it a bit annoying
| because it feels like about half the time the follow up
| it asks me isn't necessary to answer my original
| question.
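|
| A minimal sketch of the "tell it to ask first" approach, assuming
| the OpenAI Python client and a placeholder model name; the system
| prompt is doing all the work here:
|
|   from openai import OpenAI
|
|   client = OpenAI()  # reads OPENAI_API_KEY from the environment
|   response = client.chat.completions.create(
|       model="gpt-4o",  # placeholder; any chat model works
|       messages=[
|           {"role": "system",
|            "content": "Before answering, ask one clarifying "
|                       "question whenever the request is ambiguous."},
|           {"role": "user",
|            "content": "Write a script to clean up my logs."},
|       ],
|   )
|   print(response.choices[0].message.content)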
| Workaccount2 wrote:
| I have had chatGPT suggest that I give it more
| data/information pretty regularly. Although not
| technically a question, it essentially accomplishes the
| same thing. "If you give me this" vs. "Can you give me
| this?"
| theshackleford wrote:
| > I have yet to see ChatGPT or something similar ask a
| followup to clarify the question.
|
| I don't use it for dev, but for other things, and I get
| ChatGPT asking me follow-up questions multiple times a
| day.
| imiric wrote:
| This point is often brought up in threads about AI, and I
| don't think it's accurate.
|
| The thing is that statistical models only need to be fed
| large amounts of data for them to exhibit what humans would
| refer to as "creativity", or "thinking" for that matter.
| The process a human uses to express their creativity is
| also based on training and the input of other humans, with
| the difference that it's spread out over years instead of
| hours, and is a much more organic process.
|
| AI can easily fake creativity by making its output
| indistinguishable from its training data, and as long as
| it's accurate and useful, it would hold immense value to
| humanity. This has been improving drastically in the last
| few years, so arguing that they're not _really_ thinking
| creatively doesn't hold much water.
| pegasus wrote:
| > is it?
|
| It is. And I bet most people would agree with GP. Most
| people (including engineers building these systems) have
| experienced surprise with some of the outputs of these
| models. Is there anything better to gauge creativity by
| than perceived surprise?
| JohnFen wrote:
| > Is there anything better to gauge creativity by than
| perceived surprise?
|
| I think there has to be, since such surprise can be
| generated through purely random mechanisms, and I don't
| think anyone would call a purely random mechanism
| "creative".
| brabel wrote:
| If it were purely random it would generate rubbish.
| JohnFen wrote:
| Not necessarily, but even in the cases where that's true,
| there will be the occasional result that surprises (in a
| good way).
| dartos wrote:
| I can ask my computer to write a backstory for my dnd
| character, give it a few details, and it makes one.
|
| Sometimes it adds an extra detail or two even!
|
| A few years ago that was almost unthinkable. The best we
| had was arithmetic over abstract language concepts (KING -
| MAN + WOMAN = QUEEN)
|
| We don't have a solid definition of "creativity" so the
| goalpost can move around a lot, but the idea that a machine
| can not create new prose, for example, is just not true
| anymore.
|
| That's not the same as creativity, sure, but it definitely
| weakens the "computers can't be creative" argument.
| Hoasi wrote:
| Not really. Large language models' output is creative only
| insofar as you prompt them to mix data. It allows you to
| create combinations that nobody has seen before. On its own,
| an LLM is incapable of producing anything creative.
| Hallucinating due to a lack of data is the closest it comes to
| autonomous creativity. Happy accidents are an unreliable
| creativity source.
| ks2048 wrote:
| One of the ironies of current generative AIs and LLMs - they
| are creative, but need human supervision to watch for simple
| logical errors. Just the reverse of the conventional way of
| looking at humans vs machines.
| RALaBarge wrote:
| To me, the end game for any code-developing automation isn't
| that it needs to be creative to program, but that it:
|
| A: needs a close-to-complete understanding of the problem to
| solve and of the tools its languages offer to solve it in the
| most efficient way, and
|
| B: needs to be able to iterate through all of the different
| options it can generate, run `perf` or what have you on the
| outcomes of each, and present the top few winners (see the
| sketch below).
|
| AGI might not be a thing we can make, but if we can make enough
| semi-intelligent AIs communicating via a standardized format,
| each 95% right in their respective domain, we might be at a
| level that is good enough. For critical domains, add some sort
| of polling between a few admin AIs to make sure the winning
| results are not a hallucination.
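|
| A minimal sketch of option B, assuming the candidates are plain
| Python callables and using timeit in place of `perf`; the idea is
| just "generate several variants, measure them, keep the fastest":
|
|   import timeit
|
|   def variant_loop(n=10_000):
|       total = 0
|       for i in range(n):
|           total += i
|       return total
|
|   def variant_builtin(n=10_000):
|       return sum(range(n))
|
|   candidates = {"loop": variant_loop, "builtin": variant_builtin}
|   timings = {name: timeit.timeit(fn, number=1_000)
|              for name, fn in candidates.items()}
|   # present the top few winners, fastest first
|   for name, secs in sorted(timings.items(), key=lambda kv: kv[1])[:2]:
|       print(f"{name}: {secs:.3f}s")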
| samsquire wrote:
| Large language models could be used to configure abstract syntax
| trees for desired behaviour, since much of devops and software
| architecture is slotting configuration together.
|
| Kubernetes, linkerd, envoy, istio, cloudformation, terraform,
| cdk.
|
| We're just configuring schedules and binpacking computation and
| data movement into a calendar.
|
| The semantic space of understanding behaviour is different to
| understanding syntax.
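|
| A small sketch of that "slotting configuration together" idea,
| assuming PyYAML for parsing; generate_manifest() is a hypothetical
| stand-in for whatever LLM call produces the text, and the only
| real work is validating the output before anything gets applied:
|
|   import yaml
|
|   def generate_manifest(prompt: str) -> str:
|       # hypothetical stand-in for an LLM call
|       return (
|           "apiVersion: apps/v1\n"
|           "kind: Deployment\n"
|           "metadata:\n  name: demo\n"
|           "spec:\n  replicas: 2\n"
|       )
|
|   doc = yaml.safe_load(generate_manifest("2-replica deployment"))
|   assert doc.get("kind") == "Deployment", "unexpected object kind"
|   assert doc["spec"]["replicas"] > 0, "replicas must be positive"
|   print("manifest parses and passes basic checks")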
| dartos wrote:
| That's the easiest part of the job, yes. That's why there are
| so many tutorials on getting that stuff set up.
| samsquire wrote:
| Unfortunately it's unmaintainable and cannot be transformed
| without full-time DevOps engineers or SREs.
| lnxg33k1 wrote:
| I don't do much on the internet; I don't have Facebook or other
| socials, just HN, or I read simple docs, or code. Otherwise
| I'm AFK. Since October I've visited 4 sites that were not
| simple reading material, where I had to interact or do various
| levels of meaningful stuff, and all 4 had issues and expected me
| to call support etc. I hope the future of software development
| has more automated tests and fewer hiring managers.
| netman21 wrote:
| Now combine this approach with concepts of no-code, which takes
| the task of writing lines of code out of the path from concept to
| production. An AI could easily create instructions that could be
| fed to Bubble to create an app. Either Bubble is already working
| on this, or someone else is and Bubble will miss the boat.
| osigurdson wrote:
| I think the expanded pie argument is probably valid to an extent.
| If costs go down, demand goes up. Also, I suspect new entrants to
| the field will start to slow as it won't be as lucrative as it
| once was. This is exactly what happened after outsourcing
| threatened wages in the 00's - followed by insanely high
| compensation in the 10's.
| lonelyasacloud wrote:
| The expanded pie argument only works when there is something
| that cannot be done - or more cost effectively done - by the
| machine.
|
| The argument in the article is that that something for SW
| developers is increasingly going to be to feed the machine with
| requirements.
|
| I have no idea how this will pan out, but from where I am
| sitting, the ability to converse with and understand business
| users (even if those users stay human) better than current sw
| devs do doesn't feel like much of a moat, especially with the
| machine's evolving IO modalities.
| vouaobrasil wrote:
| I've been using computers since I was about 12... a long time ago.
| The conclusion I've come to is this: the best programs and
| the best tools are the ones that are lovingly (and perhaps with a
| bit of hate too) crafted by their developers. The best software
| ecosystems are the ones that are developed slowly and with care.
|
| AI coding seems all wrong compared to that world. In fact, the
| only reason why there is a push for AI coding is because the
| software world of the ol' days has been co-opted by the
| evolutionary forces of consumerism and capitalism into a
| pathology, a nightmare that exists only to push us with dark
| patterns into using things and buying things that we don't really
| need. It takes the joy out of software development by placing two
| gods above all else that is mortal and good: efficiency and
| profit.
|
| AI seems antithetical to the hacker ethic, not because it's not
| an intriguing project, but because within it seems to be the deep
| nature of automating away much of the joy of life. And yes,
| people can still use AI to be creative in some ways such as with
| AI prompts for art, but even so, the overall societal effect is
| the eroding of the bespoke and the custom and the personal from a
| human endeavor whose roots were once tinkering with machines and
| making them work for us, and whose final result is now US working
| for THEM.
| IggleSniggle wrote:
| On the one hand, I agree with you. On the other hand, you could
| make similar arguments for the typewriter, the printing press,
| or the wheel.
| vouaobrasil wrote:
| Nope, there is a fundamental difference that people who point
| out this analogy ALWAYS fail to acknowledge: AI is a
| mechanism to replace people at work that is largely
| considered creative. Yes, AI may not be truly creative, but
| it does replace people doing jobs, and those people feel they
| are doing something creative.
|
| The typewriter, the printing press, the wheel, never did
| anything of the sort.
|
| And, you're also ignoring speed and scale: AI develops much
| faster and changes the world much faster than those
| inventions did.
|
| Your argument is akin to arguing that driving at 200km/h is
| safe, simply because 20km/h is safe. The wheel was much safer
| because it changed the world at 20km/h. AI is like 1000km/h.
| IggleSniggle wrote:
| I absolutely disagree regarding handwriting, and maybe also
| on maintaining your own typewriter. The _task_ of producing
| the written document, and I don't just mean the thoughts
| conveyed, but _each stroke of each letter_, was a creative
| act that many enjoyed. Increasingly, there is only a very
| small subset of enthusiasts that are "serious" about
| writing by hand. Most "normal" people don't see the value
| in it, because if you're getting the idea across, who
| cares? But I'd wager if you talked to a monk who had spent
| their life slaving away in a too dark room making
| reproductions of books OR writing new accounts, and showed
| them the printing press, they would lament that the human
| joy of putting those thoughts to paper was in and of itself
| approaching the divine, and an important aspect of what
| makes us human.
|
| Of course I don't think you need to go that far back; the
| main thing that differentiates pre and post printing press
| is that post printing press, the emphasis is increasingly
| more on the value of the idea, and less on the act of
| putting it down.
| vouaobrasil wrote:
| I agree with you, it's true. I guess I should have been
| more precise in saying that AI takes away a much greater
| proportion of creative work. But of course, horse
| driving, handwriting, and other such things still
| involved a level of creativity, which is why in turn I
| am against most technology, especially when its use is
| unrestricted and unmoderated.
| IggleSniggle wrote:
| I'm highly sympathetic to your perspective, but it would
| be hypocritical of me to entirely embrace it. Hitting the
| spacebar just gives me so much joy, the syncopated
| negative space of it that you don't get writing by hand,
| the power of typing "top" and getting a birdseye view of
| your system, that I can't really begrudge the next
| generation of computing enthusiasts getting that same joy
| of "simply typing an idea" and getting back a coherent
| informed response.
|
| I personally lament the loss of the experience of using a
| computer that gives the same precision that you'd expect
| from a calculator, but if I'm being honest, that's been
| slowly degenerating even without the addition of AI.
| Oioioioiio wrote:
| The first iPhone came out in 2007. 17 years or less is what it
| took a modern and connected society just to solve mobile
| communication.
|
| This includes development of displays, chips, production,
| software (iOS, Android), apps etc.
|
| AI is building on that momentum, needs only software and
| specialized hardware, and the AI we are currently building
| is already optimizing itself (Copilot etc.).
|
| And the output is not something 'new' that changes a few
| things like navigation, postal service, or banking, but
| basically/potentially everything we do (including the
| physical world with robots).
|
| If this is any indication, it's very realistic to assume
| that the next 5-15 years will be very, very interesting.
| xandrius wrote:
| This feels like people getting spooked by autocomplete in
| editors.
|
| We're so far from AI being able to properly, efficiently
| and effectively develop a full system that I believe by
| the time it can take my job, I'll probably be retired or
| something. Even if my feeling is wrong, I'm still sure
| some form of developer will be needed, if only to keep
| the AI running.
| iteygib wrote:
| Those are tools humans use to create output directly, to
| speed up a process. The equivalent argument for AI would be
| if the typewriter wrote you a novel based on what you asked
| it to write, and then everyone else's typewriter created
| the same/similar novel because it's averaging all of the same
| human data input. This leads to a cultural inbreeding of
| sorts, since the data that went into it was curated to begin
| with.
|
| The real defining thing to remember is that humans don't need
| AI, but AI needs human data.
| IggleSniggle wrote:
| Humans also need human data. You might be better than I,
| but at least for myself, I know that I am just a weighted
| pattern matcher with some stochasticity mixed in.
|
| I don't think the idea of painstakingly writing out a book,
| and then having a printing press propagate your book so
| that all can easily reproduce the idea in their own mind,
| is so very different.
|
| I think this is why the real conversation here is about the
| lossiness of the data, where the "data" is conveying a
| fundamental idea. Put another way, human creativity is
| iterative, and the reason we accept "innovative" ideas is
| that we have a shared understanding of a body of work, a
| canon, and the real innovation is taking the canon and
| mixing it up with one new innovation.
|
| I'm not even arguing that AI is net good or bad for
| humanity. Just that it really isn't so different than the
| printing press. And like the Bible was to the printing
| press, I think the dominant AI model will greatly shape
| human output for a very long time, as the new "canon" in an
| otherwise splintered society, for good and for bad.
|
| Proprietary models, with funding and existing reach (like
| the Catholic Church when the Gutenberg press came along),
| will dominate the mental space. We already have Martin
| Luther's nailing creeds to the door of that church, though.
|
| Still, writing by hand does still have special meaning,
| encoding additional information that is not conveyed by
| printing press. But then as now, that additional meaning is
| mostly only accessible to those closest to you, that have
| more shared experiences with you.
|
| I'll accept that there's an additional distinction, though,
| since layers of communication will be imported and applied
| without understanding of their context; ideas replaced,
| filled in, rather than stripped. But let's be honest: every
| interpretation of a text was already distinct and uniquely
| an individual's own, albeit likely similar to those that
| shared an in-group.
|
| AI upsets the balance between producers and consumers: not
| just in that it's easier for more people to be producers,
| but in that, in this day and age, there is so little time
| left to be a consumer when everyone you know can be such a
| prolific producer.
|
| Edit: typewriters and printing presses also need human data
| JohnFen wrote:
| > Just that it really isn't so different than the
| printing press.
|
| The part that makes the goals of the AI crowd an entirely
| different beast from things like the printing press is
| that the printing press doesn't think for anyone. It just
| lets people reproduce their own thoughts more widely.
| IggleSniggle wrote:
| The printing press lets people reproduce other people's
| thoughts more widely. As to reproducing your own thoughts
| more widely, this is why I was describing a cultural
| "canon" as being the foundation upon which new ideas can
| be built. In the AI world, the "new" idea is effectively
| just the prompt (and iterative direction); everything
| else is a remix of the canon. But pre-AI, in order for
| anyone to understand your new idea, you had to mix it
| into the existing canon as well.
|
| Edit: to be abundantly clear, I'm not exactly hoping AI
| can do very well. It seems like it's going to excel at
| automating the parts of software development that I
| legitimately enjoy. I think that's also true for other
| creator-class jobs that it threatens.
| squigz wrote:
| > AI seems antithetical to the hacker ethic
|
| Don't make the mistake I do and think HN is populated
| predominantly by that type of hacker. This is, at the end of
| the day, a board for startups.
|
| (Not to say none frequent this board, but they seem relatively
| rare these days)
| throwaway_08932 wrote:
| The "I did this cool thing" posts get way more upvotes than
| the "let's be startuppy" posts. I don't think the "hacker"
| population is as rare as you're suggesting.
| squigz wrote:
| I don't think "shiny new thing" posts getting more upvotes
| indicate anything about the hacker population.
| spacebuffer wrote:
| I think he's referring to posts like this[0] rather than
| the shiny new tool people usually get excited about.
|
| [0]: https://news.ycombinator.com/item?id=30803589
| throwaway_08932 wrote:
| Correct.
| alatriste wrote:
| I remember when people used to say similar things about using
| ASM, and then about the craft of writing things in C instead of
| managed languages like Java.
|
| At the end of the day most people will only care about how the
| tool is solving the problem and how cheaply. A cheap, slow,
| dirty solution today tends to win over an expensive, quick,
| elegant one next year.
|
| Now there are still some people writing ASM, and a lot of them
| do it as a hobby. Maybe in a few years writing code from scratch
| will be seen in the same way, something very few people have to do
| be seen in the same way, something very few people have to do
| in restricted situations, or as a pastime.
| evrimoztamur wrote:
| > A cheap, slow, dirty solution today tends to win over an
| expensive, quick, elegant one next year.
|
| I disagree with this platitude, one reason being the sheer
| scale of the hidden infrastructure we rely on. Just looking at
| databases alone (Postgres, SQLite, Redis etc.) shows us that
| reliable and performant solutions dominate over others. There are
| many other examples in other fields, like operating systems,
| protocol implementations, and cryptography.
|
| It might be that you disagree on the basis of what you see in
| day-to-day B2B and B2C business cases where just solving the
| problem gets you paid, but then your statements should
| reflect that too.
| dt3ft wrote:
| Writing code by typing on a keyboard will be just a hobby?
|
| Sure, and who is supposed to understand the code written by
| AI when we retire? Since writing code by typing on a keyboard
| will apparently cease to exist, who will write prompts for an
| AI and put the code together?
|
| Person: Hey AI, build me a website that does a, b and c.
|
| AI: Here you go.
|
| Person: Looks like magic to me, what do I do with all this
| text?
|
| AI: Push it to Git and deploy it to a web server.
|
| Person: What is a web server? What is a Git?
|
| AI: ... let me google that for you.
|
| Yeah, I'm just not seeing it play out as in the conversation
| above.
| margorczynski wrote:
| > Sure, and who is supposed to understand the code written
| by AI when we retire?
|
| Why would anyone need to? Do the product/business people
| who order something to be built understand how it is done,
| or what Git, a web server, etc. are? It is based on trust, and if
| you can show the AI system can consistently achieve at
| least humanlike quality and speed on almost any development
| task then there is no need to have a technical person in
| the loop.
| guappa wrote:
| So there could never be a new provider or a new protocol
| because AI wouldn't be able to use them or create them.
|
| You can just make websites on a pre-approved list.
| margorczynski wrote:
| > So there could never be a new provider or a new
| protocol because AI wouldn't be able to use them or
| create them
|
| On what do you base this? Is there some upper bound to
| the potential of AI reasoning that limits its skill at
| creating anything more complex? I think it is on the
| contrary - it is humans who are bound by our biological
| and evolutionary hard limits, the machine is not.
| FreeFull wrote:
| Presumably, the AI would have access to just do all the git
| and web server stuff for you... The bigger problem I see
| would be if the AI just refuses to give you what you ask
| for.
|
| Person: I want A
|
| AI: Here's B
|
| Person: No, I wanted A
|
| AI: I'm sorry. Let me correct that... Here's B
|
| .. ad nauseam.
|
| Or alternatively:
|
| Person: <reasonable request>
|
| AI: I'm sorry, I can't do that
| Oioioioiio wrote:
| Have you seen the Devin "AI developer" demo?
|
| Businesses already don't know what security is; they will
| jump headfirst into anything that allows them to get rid
| of those weird developer dudes whom they have to cater to
| and give a lot of money.
|
| I personally would also assume that there might be a new
| programming language AI will invent. Something faster, more
| optimized for AI.
| int_19h wrote:
| Don't worry, it'll spin up a git repo and an instance for
| you, as well.
|
| How stable and secure all of this will be though is another
| question. A rhetorical one.
| kypro wrote:
| > What I've come to the conclusion is this: the best programs
| and the best tools are the ones that are lovingly (and perhaps
| with a bit of hate too) crafted by their developers.
|
| I think this is an example of correlation but not causation.
| Obviously it's true to some extent in the sense that all things
| being equal more care is good, but I think all you're probably
| saying here is that good products are built well and products
| that are built well tend to be built by developers that care
| enough to make the right design decisions.
|
| I don't think there's any reason AI couldn't make as good (or
| better) technical decisions as a human developer who is both
| technically knowledgeable and who cares. I think I personally
| care a lot about the products I work on, but I'm far from
| infallible. I often look back at decisions I've made and realise
| I could have done better. I could imagine how an AI with
| knowledge of every product on Github, a large collection of
| technical architecture documentation and blog posts could make
| better decisions than me.
|
| I suppose there's also some creativity involved in making the
| "right" decisions too. Sometimes products have unique
| challenges which have no proven solutions that are considered
| more "correct" than any other. Developers in this cases need to
| come up with their own creative solutions and rank them on
| their unique metrics. Could an AI do this? Again, I think so.
| At least LLMs today seem able to come up with solutions to
| novel problems, even if they're not always great at this moment
| in time. Perhaps there are limits to the creativity of current
| LLMs though and for certain problems which require deep
| creativity humans will always outperform the best models. But
| even this is probably only true if LLM architecture doesn't
| advance - assuming creativity is a limitation in the first
| place, which I'm far from convinced of.
| raytopia wrote:
| Are there other sites that focus more on hacker ethic projects?
| Oioioioiio wrote:
| My dream is to get a farm and start doing a lot of things which
| already exist and explore them by myself.
|
| Doing pottery, art, music etc.
|
| I do call myself a hacker, and what I do for a living is very
| aligned with one of my biggest hobbies, "computers", but that
| doesn't mean shit to me tbh.
|
| I will leverage AI to write the things I wanna write but know
| that I don't have the time for doing those 'bigger' projects.
|
| Btw, very few people actually write good software. For most it's
| a job. I'm really good at what I do because there are not that
| many of us out there in a normal company.
| samatman wrote:
| > _AI seems antithetical to the hacker ethic_
|
| I disagree, chatbots are arguably best for hacks: dodgy
| kludged-up software that works if you don't sneeze on it, but
| accomplishes whatever random thing I wanted to get done.
| They're a great tool for hackers.
|
| A bunch of clueless managers are going to fall for the hype and
| try and lean on the jank machine to solve problems which aren't
| suitable to a quick hack, and will live to regret it. Ok, who
| am I kidding, most of them are going to fail up. But a bunch of
| still-employed hackers will curse their name, while cashing the
| fat paycheck they're earning for cleaning up the mess.
| osigurdson wrote:
| I honestly wouldn't mind if the type of software where you talk
| to business users and implement their random thoughts to automate
| some banal business process goes away. It generally seems low
| value anyway.
|
| We should still have software products however.
| kovezd wrote:
| Folks see it wrong. It's not about human vs machine software
| development.
|
| It's about allocating the brightest humans to the most productive
| activities.
| paulryanrogers wrote:
| Keep in mind there are many more humans that aren't as bright.
| If there are fewer and fewer ways for them to contribute then
| they'll find other ways to get what they need.
| kovezd wrote:
| And as long as robots are where they are, there are a lot of
| jobs available in hospitality and healthcare. On average,
| quality of life will improve.
| samatman wrote:
| A reasonable, albeit lossy, gloss on "bright" is "someone who
| earns their living through intellectual labor". Clearly there
| are mathematicians who become potato farmers, but what we
| don't see is movement in the other direction: call it
| sufficient but not necessary.
|
| The corollary being that those people are already earning
| their bread doing things which an artificial intelligence
| can't replicate. I've seen many vast overestimates of how
| many of these jobs could even be replaced with a Jetsons
| robot, and we're nowhere close to deploying anything vaguely
| resembling one of those.
| smallmancontrov wrote:
| Making sure those productive minds sell ads rather than do
| science. Value creation!
| kovezd wrote:
| Yes. The distribution of information is indeed a complex
| problem, as graph theory shows us.
|
| Regarding science, it is about one standard deviation further
| than software development in complexity. It's not necessarily
| a trade-off. On the contrary, software applies scientific
| breakthroughs.
| smallmancontrov wrote:
| My problem isn't the graph or even its particular failings
| (vanishing gradients in science being one of many), it's
| the ideological opposition to regularization.
| clktmr wrote:
| As long as there is no AGI, no software engineer needs to be
| worried about their job. And when there is, obviously everything
| in every field will change and this discussion will soon be
| futile.
| Toutouxc wrote:
| This has been my cope mantra so far. I don't mind if my job
| changes a lot (and ideally loses the part I dislike the most --
| writing the actual code), and if I find myself in a position
| where my entire skillset doesn't matter at all, then well a LOT
| of people are in trouble.
| tomashubelbauer wrote:
| I have seen programmers express that they dislike writing
| code before, and I wonder what the ratio of people who dislike
| it to people who like it is. For me, writing code is one of
| the most enjoyable aspects of programming.
| Vinnl wrote:
| The worst future is where there still are plenty of jobs, but
| all of them consist of talking to an AI and hoping you use
| the right words to get it to do what you need it to.
| weweweoo wrote:
| Not really. As long as there is no universal basic income,
| any job with decent salary beats unemployment. The job may
| suck, but the money allows you to do fun stuff after work.
| kossTKR wrote:
| If you dislike writing code were you pushed into this field
| by family, education or because of money?
|
| Because not liking code and being a dev is absolutely bizarre
| to me.
|
| One of the most amazing things about being able to "develop"
| in my view is exactly in those rare moments where you just
| code away, time flies, you fix things, iterate, organise your
| project completely in the zone - just like when I design,
| paint or play music, or do sports uninterrupted; it's that flow
| state.
|
| In principle i like the social aspects but often they are the
| shitty part because of business politics, hierarchy games or
| bureaucracy.
|
| What part of the job do you like then?
| Toutouxc wrote:
| I enjoy the part where I'm putting the solution together in
| my head, working out the algorithms and the architecture,
| communicating with the client or the rest of the team,
| gaining understanding.
|
| I do not enjoy the next part, where I have to type out
| words and weird symbols in non-human languages, deal with
| possibly broken tooling and having to remember if the
| method is called "include" or "includes" in this language,
| or whether the lambda syntax is () => {} or -> () {}. I can
| do this second part just fine, but it's definitely not what
| I enjoy about being a developer.
| kossTKR wrote:
| Interesting, I also like the "scheming" phase, but also
| very much the optimisation phase.
|
| I completely agree that tooling, dependencies and syntax
| / framework github issue labyrinths have become too much
| and GPT-4 already alleviates some of that, but I wonder if
| the scheming phase will get eaten too very soon from just
| a few sentences of business proposal - who knows.
| pydry wrote:
| Market consolidation (Microsoft/Google/Amazon) might cause a
| jobpocalypse, just as it did for the jobs of well paid auto
| workers in the 1950s (GM/Chrysler/Ford).
|
| GM/Chrysler/Ford didn't have to be _better_ than the startup
| competition they just had to be mediocre + be able to use their
| market power (vertical integration) to squash it like a bug.
|
| The tech industry is headed in that direction as computing
| platforms all consolidate under the control of an ever smaller
| number of companies (android/iphone + aws/azure/gcloud).
|
| I feel certain that the mass media will scapegoat AGI if that
| happens, because AGI will still be around and doing stuff on
| those platforms, but the job cuts will be more realistically
| triggered by the owners of those platforms going "ok, our
| market position is rock solid now, we can REALLY go to town on
| 'entitled' tech workers".
| geodel wrote:
| Seems about right to me. Hyper-standardization around a few
| architecture patterns using
| Kubernetes/Kafka/Microservices/GraphQL/React/OTelemetry etc.
| can roughly cover 95-99% of all typical software development
| when you add a cloud DB.
|
| Now I know there are a ton of different flavors of each of
| these technologies, but they will be mostly a distraction for
| employers. With a heavy layer of abstraction over the above
| patterns, and SLAs from vendors like Microsoft/Google/Amazon
| as you say, employers will hardly be bothered by the vast
| variety of software products.
| quonn wrote:
| The technologies you mentioned are merely the framework in
| which the work is done. 25 years ago none of that was even
| needed to create software. Now they are needed to manage
| the complexity of the stack, but the actual content is the
| same as it used to be.
| dgb23 wrote:
| > this discussion will soon be futile
|
| Yes we could simply ask the AGI what to do anyways. I hope it's
| friendly.
| adrianN wrote:
| You don't have to completely replace people with machines to
| destroy jobs. It suffices if you make people more effective so
| that fewer employees are needed.
| MrBuddyCasino wrote:
| If software developers become more effective, demand will
| also rise, as they become profitable in areas where
| previously they weren't. The question then becomes which of
| those two effects outpaces the other, which is an open
| question.
| 63 wrote:
| The number of people/businesses that could use custom
| software if it were cheaper/easier to develop is nearly
| infinite. If software developers get more productive, demand
| will increase
| Workaccount2 wrote:
| Or lower the bar of successfully doing such work so that the
| field opens up to many more workers.
|
| Many software devs will likely have job security in the
| future, however those $180k salaries are probably much less
| secure.
| zeroonetwothree wrote:
| Just like when IDEs made programmers more effective so that
| fewer were needed. Oh wait, the opposite happened.
| fullstackchris wrote:
| And does even AGI change the bigger picture? We have 26.3
| million AGIs currently working in this space [1]. I've never
| seen a single one take all the work of the others away...
|
| [1] https://www.griddynamics.com/blog/number-software-developers....
| zeroonetwothree wrote:
| What do you think the "A" stands for?
| pizzafeelsright wrote:
| I would argue future engineers should be worried a bit. We no
| longer need to hire new developers.
|
| I was not trained professionally, yet I'm writing production
| code that's passing code reviews in languages I've never used. I
| will create a prompt, validate that the code compiles and passes
| tests, have it explained so I understand it was written as
| expected, write documentation about the code, write the PR, and
| I am seen as a competent contributor. I can't pass leetcode
| level 1, yet here I am being invited to speak to developers.
|
| Velocity goes up and the cost of features will drop. This is
| good. I'm seeing at least 10-to-1 output compared to a year ago
| based upon integrating these new tools.
| supriyo-biswas wrote:
| To be fair, Leetcode was never a good indicator of developer
| skills, though primarily because of the time pressure and the
| restrictive format that dings you for asking questions about
| the problem.
| politician wrote:
| Speaking of Leetcode... is anyone selling a service to
| boost Leetcode scores using AI yet? It seems like that's
| fairly low hanging fruit at this point.
| ioblomov wrote:
| Based on their demos, HackerRank is doing this as part of
| their existing products. Which makes sense since prompt
| engineering will soon become a minimum requirement for
| devs of any experience level.
| visarga wrote:
| Yeah, it sounds to me like your teammates are going to pick up the
| tab at the end, when subtle errors will be 10x harder to
| repair, or you are working on toy projects where correctness
| doesn't really matter.
| nyrikki wrote:
| To add to this.
|
| I was going through Devin's 'pass' diffs from SWE-bench.
|
| Every one I ended up tracing to actual issues made
| changes that would reduce maintainability or introduce
| potential side effects.
|
| I think it may be useful as a suggestion in a red-green-
| refactor model, but it will end up producing code that is
| hard to maintain and modify.
|
| Note this one here that introduced circular dependencies,
| changed a function that only accepted points to one that
| appears to accept any geometric object but only added
| lines.
|
| Domain knowledge and writing maintainable code are beyond
| generative transformers.
|
| https://github.com/CognitionAI/devin-swebench-results/blob/m...
|
| You simply can't get past what Gödel and Rice proved with
| current technology.
|
| It is like when visual languages were supposed to replace
| programmers. Code isn't really the issue, the details are.
| ekidd wrote:
| Thank you for reading the diffs and reporting on them.
|
| And to be fair, lots of humans are already at least this
| bad at writing code. And lots of companies are happy with
| garbage code so long as it addresses an immediate
| business requirement.
|
| So Devin wouldn't have to advance much to be competitive
| in certain simple situations where people don't care
| about anything that happens more than 2 quarters into the
| future.
|
| I also agree that producing good code which meets real
| business needs is a hard problem. In fact, any AI which
| can truly do the work of a good senior software engineer
| can probably learn to do a lot of other human jobs as
| well.
| nyrikki wrote:
| Architectural erosion is an ongoing problem for humans,
| but they don't produce tightly coupled low cohesion code
| by default at the SWE level the majority of the time.
|
| With this quality of changes it won't be long until
| violations stack up to where further changes will be
| beyond any algorithm's ability to unravel.
|
| While lots of companies do only look out in the short
| term, human programmers are incentivized to protect
| themselves from pain if they aren't forced into
| unrealistic delivery times.
|
| AT&T Wireless being destroyed as a company due to a
| failed SAP migration that was largely due to fragile code
| is a good example.
|
| But I guess if the developer jobs that will go away are
| from companies that want to underperform in the market
| due to errors and a code base that can't adapt to
| changing market realities, that may happen.
|
| But I would fire any non-intern programmer if they
| constantly did things like removing deprecation comments
| and introduced circular dependencies with the majority of
| their commits.
|
| https://github.com/CognitionAI/devin-swebench-results/blob/m...
|
| PAC learning is powerful but is still probably
| approximately correct.
|
| Until these tools can avoid the most basic bad practices
| I don't see any company sticking to them in the long
| term, but it will probably be a very expensive experiment
| for many of them.
| falcor84 wrote:
| Can't we just RLHF code reviews?
| nyrikki wrote:
| RLHF works on problems that are difficult to specify yet
| easy to judge.
|
| While RLHF will help improve systems, code correctness is
| not easy to judge outside of the simplest cases.
|
| Note how in OpenAI's technical report, they admit
| performance on college level tests is almost exclusively
| from pre-training. If you look at LSAT as an example, all
| those questions were probably in the corpus.
|
| https://arxiv.org/abs/2303.08774
| falcor84 wrote:
| >RLHF works on problems that are difficult to specify yet
| easy to judge.
|
| But that's the thing, that it seems that everyone here on
| HN (and elsewhere) finds it easy to judge the flaws of
| AI-generated code, and they seem relatively consistent.
| So if we start offering these critiques as RLHF at scale,
| we should be able to bring the LLM output to the level
| where further feedback is hard (or at least
| inconsistent), right?
| barrell wrote:
| Agreed. I use LLMs quite extensively and the amount of
| production code I ship from an LLM is next to zero.
|
| I even wrote a majority of my codebase in Python despite
| not knowing Python precisely because I would get the best
| recommendations from LLMs. As a frontend developer, with no
| experience in backend engineering in the last decade, and
| no Python experience, building an app where almost every
| function has gone through an LLM at some point, for almost
| 8 months -- I would be extremely surprised if some of the
| code it generated landed in production.
| csomar wrote:
| Most software is already as bad as this, though. And
| managers won't care (maybe even shouldn't?) if the
| execution fairly delivers.
|
| Think of this as Facebook page vs. WordPress website vs. A
| full custom website. The best option is to have a full
| custom website. Next, is a cheaper option from someone who
| can put a few lines together. The worst option is a
| Facebook page that you can create yourself.
|
| But the Facebook page also does the job. And for some
| businesses, it's good enough.
| acedTrex wrote:
| I have yet to see either copilot or gpt4 generate code that I
| would come close to accepting in a PR from one of my devs, so
| I struggle to imagine what kind of domain you are in that the
| code it generates actually makes it through review.
| ipaddr wrote:
| Honestly that sounds like a problem with the way you are
| managing PRs. The PRs are too big, or you are overly
| nitpicking PRs on unimportant things.
| jackling wrote:
| What's your domain?
| jayd16 wrote:
| I wonder if the reviewers are just using GPT as well.
| lainga wrote:
| Who's "we"?
| doktrin wrote:
| This doesn't vibe with my experience at all. We also use LLMs
| and it's exceedingly rare that a non-trivial PR/MR gets waved
| through without comment.
| robotnikman wrote:
| I have accepted using these tools to help when it comes to
| generating code and improving my output. However when it
| comes to dealing with more niche areas (in my case retail
| technology) it falls short.
|
| You still need that domain knowledge of whatever you are
| writing code for or integrating with, especially if the
| technology is more niche, or the documentation was never made
| publicly available and scraped by the AI.
|
| But when it comes to writing boilerplate code it is great, or
| when working with very commonly used frameworks (like front-
| end JavaScript frameworks in my case).
| pphysch wrote:
| > passes tests
|
| Okay, so you are just kicking the can down the road to the
| test engineers. Now your org needs to spend more resources on
| test engineering to really make sure the AI code doesn't fuzz
| your system to death.
|
| If you squint, using a language compiler is analogous to
| writing tests for generated code. You are really writing a
| spec and having something automatically generate the actual
| code that implements the spec.
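|
| A minimal sketch of that tests-as-spec idea, assuming the
| hypothesis library; imagine the body of dedupe() came from a code
| generator, while the test below is the human-written spec it has
| to satisfy:
|
|   from hypothesis import given, strategies as st
|
|   def dedupe(xs):  # pretend this body was machine-generated
|       seen, out = set(), []
|       for x in xs:
|           if x not in seen:
|               seen.add(x)
|               out.append(x)
|       return out
|
|   @given(st.lists(st.integers()))
|   def test_dedupe_spec(xs):
|       out = dedupe(xs)
|       assert set(out) == set(xs)        # nothing lost or invented
|       assert len(out) == len(set(out))  # no duplicates remain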
| l3mure wrote:
| Post some example PRs.
| packetlost wrote:
| > I'm writing production code that's passing code reviews in
| languages I never used
|
| Your coworkers likely aren't doing a very good job at
| reviewing, but also I don't blame them. The only way to be
| sure code works is to use it for its intended task. Brains
| are bad interpreters, and LLMs are extremely good bullshit
| generators. If the code makes it to prod and works, good. But
| honestly, if you aren't just pushing DB records around or
| slinging HTML, I doubt it'll be good enough to get you very
| far without taking down prod.
| kaba0 wrote:
| Meanwhile I'm paid for editing a single line of code in 2
| weeks, and nothing less than the singularity will replace me.
|
| But sure, call me back when AI can actually reason about
| possible race conditions, instead of spewing out the
| definition of one it got from wikipedia.
| zeroonetwothree wrote:
| I still think it's much more an "if" than a "when". (Of course
| I am perhaps more strict with my definition)
| brailsafe wrote:
| Software engineers already need to be worried about either
| losing their current job or getting another one. The market is
| pretty much dead already unless you're working on something
| AI-related.
| schaefer wrote:
| If AGI and artificial sentience come hand in hand, I fail to
| see how our plan to spin up AGIs as black boxes to "do the
| work" is not essentially a new form of slavery.
|
| Speaking from an ethics point of view: at what point do we say
| that AGI has crossed a line and deserves self autonomy? And how
| would we ever know when the line is crossed?
| hathawsh wrote:
| Humans can't be copied. It seems like the inability to copy
| people is one of the pillars of our morality. If I could
| somehow make a perfect copy of myself, would I think about
| morality and ethics the same way? Probably not.
|
| AGI will theoretically be able to create perfect copies of
| itself. Will it be immoral for an AGI to clone itself to get
| some work done, then cause the clone to cease its existence?
| That's what computer software does all the time. Keep in mind
| that both the original and the clone might be pure bits and
| bytes, with no access to any kind of physical body.
|
| Just a thought.
| int_19h wrote:
| If humans fundamentally work in the same way as any such
| hypothetical AGI, then they can be copied in the same way.
| hathawsh wrote:
| If we ever do find a way to copy humans (including their
| full mental state), I suspect all law and culture will be
| upended. We'll have to start over from scratch.
| bsaul wrote:
| ", I believe there would still be an underlying formal definition
| of the business logic generated in the backend"
|
| Not just business logic. A technical one too, such as "prove that
| this critical code can never run into an infinite loop".
|
| AI could very well be the revolution needed to bring theorem
| provers and other formal-proof-based languages (Coq, Idris,
| etc.) to the mainstream mass of developers.
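|
| A tiny Lean 4 sketch of that "can never run into an infinite loop"
| style of guarantee: the definition below is only accepted because
| the checker can see the recursion is structural, i.e. it provably
| terminates:
|
|   def sumTo (n : Nat) : Nat :=
|     match n with
|     | 0 => 0
|     | k + 1 => (k + 1) + sumTo k
|
|   #eval sumTo 10  -- 55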
| agentultra wrote:
| Only if they can understand the proofs first.
|
| However, LLMs aren't capable of writing proofs. They can only
| regurgitate text that looks like a proof by training on proofs
| already written. There's no reasoning behind what they produce.
| So why waste the time of a human to review it?
|
| If the provers' kernel and automation make the proof pass but
| it makes no sense, what use is it?
|
| People write proofs to make arguments to one another.
| adrianN wrote:
| If the proof makes it past validation, it's a valid proof
| and makes sense by definition.
| marcosdumay wrote:
| It means your proof proves _something_.
|
| Whether it's something you care about, or even something
| that should be true at all is out of scope.
| agentultra wrote:
| This is what I meant, thanks for clarifying.
|
| I might also add that, "they," on the first line are
| software developers. Most are not trained to read proofs.
| Few enough can write a good test let alone understand
| when a test is insufficient evidence of correctness.
|
| I had to learn on my own.
|
| Even if LLMs could start dropping proofs with their PRs,
| would they be useful to people if they couldn't
| understand them or what they mean?
| marcosdumay wrote:
| Yeah, people fixated on the meaning of "makes no sense"
| to evade accepting that the proofs LLMs output are not
| useful at all.
|
| In a similar fashion, almost all LLM-created tests have
| negative value. They are just easier to verify than
| proofs, but even the bias into creating more tests (taken
| from the LLM fire hose) is already harmful.
|
| I am almost confident enough to make a similarly wide
| claim about code. But I'm still collecting more data.
| Tainnor wrote:
| It's almost always easier to understand theorems than
| proofs. People could be writing down properties of their
| programs and the AI could generate a proof or produce a
| counterexample. It is not necessary to understand the
| proof in order to know that it is correct.
|
| At least in principle. Practically, any system at least
| as powerful as first-order logic is undecidable, so there
| can never be any computer program that would be able to
| do this perfectly. It might be that some day, an AI could
| be just as good as a trained mathematician, but so far,
| it doesn't look like it.
| agentultra wrote:
| > It might be that some day, an AI could be just as good
| as a trained mathematician, but so far, it doesn't look
| like it.
|
| One of the ongoing arguments in automated theorem
| proving has been this central question: _would anyone
| understand or want to read a proof written by a machine?_
|
| A big part of why we write them and enjoy them is
| _elegance_, a rather subjective quality appreciated by
| people.
|
| In computer science we don't tend to care so much for
| this quality; if the checker says the proof is good
| that's generally about as far as we take it.
|
| But endearing proofs that last? That people _want_ to
| read? That's much harder to do.
| jerf wrote:
| "However, LLMs aren't capable of writing proofs."
|
| LLMs != AI.
|
| I'm not worried about LLM's impact on my career. In their
| current form they're nothing more than a trap, who will suck
| in anyone foolish enough to grow a dependence on them, then
| destroy them, both personally and corporately. (Note
| difference between "using them" and "grow a dependence on
| them".) Code that no human can understand is bad enough, code
| that no human has _ever_ understood is going to be even worse
| as it starts piling up. There are many characteristic
| failures of software programming that LLMs are going to
| suffer from worse than humans. They're not going to be
| immune to ever-growing piles of code. They're not going to be
| immune to exponentially increasing complexity. They'll just
| ensure that on the day you say "OK, now, please add this new
| feature to my system" and they fail at it, nobody else will
| be able to fix it either.
|
| If progress froze, over the next few years the programming
| community would come to an understanding of the rather large
| amounts of hidden technical, organizational, and even legal
| and liability debt that depending on LLMs creates.
|
| But LLMs != AI. I don't guarantee what's going to happen
| after LLMs. Is it possible to build an AI that can have an
| actual comprehension of architecture at a symbolic level, and
| have some realistic chance of being told "Take this code base
| based on Mac OS and translate it to be a Windows-native
| program", and even if it has to chew on it for a couple of
| weeks, succeed? And succeed like a skilled and dedicated
| human would have, that is, an actual, skillful translation,
| where the code base is still comprehensible to a human at the
| end? LLMs are not the end state of AI.
| iLoveOncall wrote:
| > Before the advent of these models, the main argument against
| automating these tasks was that machines can't think creatively.
|
| Really? For me it's always been, and hasn't changed since LLMs
| were released, that it's much harder to explain in enough
| detail what you want to an AI and get the correct result than to
| actually code the correct result yourself.
|
| Prompting is like a programming language whose keywords have
| random outcomes; it's 100% inferior to simply writing the code
| itself.
|
| LLMs will help with tooling around writing the code, like static
| analysis does, but it'll never ever replace writing code.
| delegate wrote:
| It's a good overview, but I think there's one important aspect
| that's not discussed.
|
| We're looking at AI competing with the jobs that programmers do
| today, but it's likely that these new AI tools will change
| software itself.
|
| I mean, why have a complicated UI with design/validation/etc when
| you can just tell your phone you want a plane ticket to Paris
| tomorrow ? I'm just going to guesstimate that at least half of
| the apps we use today can be done without a UI or with a very
| different type of UI+natural language.
|
| Add AI agents that are fine-tuned on all your personal data in
| real time (eg. photos you take, messages you send, etc) which
| will end up knowing you better than your mom. In a company
| setting, the AI will know all your JIRA tickets and Slack/Teams
| conversations, e-mails and so on.
|
| On the backend, instead of API endpoints, you'll have just one -
| where the AI asks you for data piece by piece, while the client
| AI provides it. No need to program this, the AIs can just figure
| it out by themselves.
|
| Definitely interesting times, but change is coming fast.
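|
| A minimal sketch of the single-endpoint backend described above,
| assuming Flask; negotiate() is a hypothetical stand-in for
| whatever model actually drives the conversation:
|
|   from flask import Flask, request, jsonify
|
|   app = Flask(__name__)
|
|   def negotiate(message: str) -> str:
|       # stand-in: a real system would hand `message`, plus whatever
|       # state has been gathered so far, to an LLM and return its
|       # next question or answer
|       return f"Got: {message!r}. What dates are you thinking of?"
|
|   @app.post("/converse")
|   def converse():
|       message = request.get_json()["message"]
|       return jsonify({"reply": negotiate(message)})
|
|   if __name__ == "__main__":
|       app.run()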
| geraneum wrote:
| I believe this is where we're headed. AI replacing much of the
| software itself. You don't need a website to manage your rental
| properties, another one for ordering food, etc.
|
| However it's not as imminent as "just fine tune it on personal
| data".
| i_am_a_squirrel wrote:
| Underrated comment! I agree, it's like the LLM becomes all of
| the logic, all of the code. I guess that's less computationally
| efficient though for some simple things.. for now!
| Anuiran wrote:
| This is similar to my thoughts, "code" is for humans. AI does
| need a game engine or massive software, some future video game
| just needs to output the next frame and respond to input.
| Little to no code required.
| ks2048 wrote:
| This indeed may be the future, and I'll probably be a grumpy
| old man complaining about it. What was once a form will be
| replaced by a system needing to connect to a server running a
| 1T parameter model requiring specialized hardware and using 1e6
| times the power.
| delegate wrote:
| That got me thinking that these things will be able to first
| learn and then write the optimised code to avoid the energy
| usage. Think 'muscle memory' but for AI interacting with
| external world (or another AI)...
| empath-nirvana wrote:
| I don't really think that thinking of LLMs and related
| technologies as "Artificial Humans" is the right way to think
| about how they're going to be integrated into workflows. What is
| going to happen is that people are going to be adopt these tools
| to solve particular tasks that are annoying or tedious for
| developers to do, in a way similar to the way tools like Ansible
| and Chef replaced the task of logging into ssh servers manually
| to install stuff, and aws replaced 'sending a guy out to the data
| center to setup a server' for many companies.
|
| And it's going to be done piecemeal, not all-at-once. Someone
| will figure out a way to get an AI to do _one_ thing faster and
| cheaper than a human and sell _that_. Maybe it's automatic test
| generation, maybe it's automatically remediating alerts, maybe
| it's code reviews. The scope of work of what a software developer
| does will be reduced until it's reduced to two categories:
|
| 1) Those tasks that it is still currently only possible for a
| human to do. 2) Those tasks which are easier and cheaper for a
| human to do.
|
| You don't even really need to think about LLMs as AIs or
| conscious or whether they pass the turing test or not, it's just
| like every other form of automation we've already developed.
| There are vast swathes of work that software developers and IT
| people did a few decades ago that almost nobody does any more
| because of various forms of automation. None of that has reduced
| the overall amount of jobs for software developers because there
| isn't a limited amount of software development to do. If you make
| software development less expensive and easier, then people will
| apply it to more tasks, and software developers will become
| _more_ valuable and not _less_.
| visarga wrote:
| > 1) Those tasks that it is still currently only possible for a
| human to do. 2) Those tasks which are easier and cheaper for a
| human to do.
|
| I agree, but "1" must include all tasks where a mistake could
| lead to liabilities for the company, which is probably most
| tasks. LLMs can't be held responsible for their fuckups, they
| can't be punished, they have no body. It's like the genie from
| the bottle, it will grant your three wishes, but they might
| turn out in a surprising way and it can't be held accountable.
|
| The same will apply, for example, to using LLMs in medicine. We
| can't afford to risk it on AI; a human must certify the
| diagnosis and treatment.
|
| In conclusion we can say LLMs can't handle accountability, not
| even in principle. That's a big issue in many jobs. The OP
| mentioned this as well:
|
| > even when AI coders can be rented out like EC2 instances, it
| will be beneficial to have an inhouse team of Software
| Developers to oversee their work
|
| Oversight is basically manual-mode AI alignment. We won't
| automate that; the more advanced an AI, the more effort we need
| to put into overseeing its work.
| estebank wrote:
| "A COMPUTER CAN NEVER BE HELD ACCOUNTABLE THEREFORE A
| COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION" -- IBM slide
| from 1979.
| visarga wrote:
| hahaha funny
|
| Let me tell you a story - a company was using AI for
| invoice processing, and it misread a comma for a dot, so
| they sent a payment 1000x larger than expected, all
| automated of course because they were very modern. The
| result? They went bankrupt. "Bankrupted by AI error" might
| become a thing.
| chromanoid wrote:
| Of course. It is like when a company goes bankrupt
| because they didn't establish good fire protection in
| their factory. Using AI automation has its risks that
| have to be mitigated appropriately.
| esafak wrote:
| Some might consider that a plus in the same way that "you
| can't get fired for choosing IBM" -- it's a way to
| outsource blame.
| bee_rider wrote:
| I think there is some under-explored issue in the liability,
| but I don't know enough about business law to have a useful
| opinion on it. It seems interesting, though.
|
| Even if an LLM and a human were equally competent, the LLM is
| not a living being and, I guess, isn't capable of being
| liable for anything. You can't sue it or fire it.
|
| Doctors have to carry insurance to handle their liability. I
| can see why it would be hard to replace a doctor with an LLM
| as a result.
|
| Typically engineers aren't personally liable for their
| mistakes in a corporate setting. (I mean, there's the whole
| licensed Professional Engineer distinction, but I don't feel
| like dying on that hill at the moment). So where does the
| liability "go?" I think it just gets eaten by the company
| somehow. They might fire the engineer, but that doesn't make
| the victim whole or benefit society, right?
|
| Ultimately we'd expect companies that are so bad at
| engineering to get sued so often that they implement process
| improvements. That could be wrapped around AIs instead of
| people, right? But we're not using the humans' unique ability
| to bear liability, I think?
| politician wrote:
| How do you negotiate for a salary when the role is to be
| ablative armor for the company? "I'm excited to make myself
| available to absorb potential reputation damage for $CORP
| when the AI goes off the rails."
| empath-nirvana wrote:
| It's vanishingly rare that individuals have any liability or
| are punished for software fuckups. Maybe if someone is
| completely incompetent, they'll get fired, but I'm not sure
| that's meaningfully different than cancelling a service that
| doesn't work as advertised.
| wongarsu wrote:
| > I agree, but "1" must include all tasks where a mistake
| could lead to liabilities for the company, which is probably
| most tasks
|
| If you hire a junior programmer and they make a mistake, they
| aren't held liable either. Sure, you can fire them, but
| unless there's malice or gross negligence the liability buck
| stops at the company. The same can be said about the wealth
| of software currently involved in producing software and
| making decisions. The difficulty of suing Microsoft or the
| llvm project over compiler bugs hasn't stopped anyone from
| using their compilers.
|
| I don't see how LLMs are meaningfully different from a company
| assuming liability for employees they hire or software they
| run. Even if they were AGI it wouldn't meaningfully change
| anything. You make a decision whether the benefits outweigh
| the risks, and adjust that calculation as you get more data
| on both benefits and risks. Right now companies are hesitant
| because the risks are both large and uncertain, but as we get
| better at understanding and mitigating them LLMs will be used
| more.
| robryan wrote:
| Even with a junior there is generally a logic to the
| mistake and a fairly direct path to improving in future. I
| just don't know if a model that picks the next token
| statistically is going to be able to get to that level.
| gtirloni wrote:
| I think this is the sanest comment I've seen about LLMs.
| cogman10 wrote:
| Exactly my take.
|
| I'd further state that LLMs appear to do ok with generating
| limited amounts of new code. Telling them "Here's a class,
| generate a bunch of unit tests" works. However, telling them
| "Generate me a todo application" will give mixed results at
| best.
|
| Further, it seems like updating and changing code is simply
| right out of their wheelhouse. That's where I think devs will
| be primarily valuable. Someone needs to understand the written
| code for when the feature request eventually bubbles through
| "Now also be able to do x". I don't think you'll be able to
| point an LLM at a code repository and instruct it "Update this
| project so that it can do feature X"
| joenot443 wrote:
| This is really well put and level-headed, I particularly like
| the comparison to AWS and in-field ops work.
| karmakaze wrote:
| It also paints a narrow picture of how results are described.
| I suspect that there will be a lot more by-example _like this_,
| and iteration on the output, _except that..._ where the
| inputs/outputs are multi-modal.
|
| Everything is going to be _close enough_, not fully spec'ed
| out. Full self-driving is only the beginning of everything.
| FrustratedMonky wrote:
| For people worried about jobs: the last diagram, showing the big
| increase in overall market size, is the big 'leap of faith' about
| the future.
| dvh wrote:
| Yesterday I asked copilot to help me:
| https://www.eevblog.com/forum/dodgy-technology/manufacturers...
| xandrius wrote:
| Give us poor plebs some context please.
| Kon-Peki wrote:
| This is the future of software development:
|
| > The Administration will work with Congress and the private
| sector to develop legislation establishing liability for software
| products and services. Any such legislation should prevent
| manufacturers and software publishers with market power from
| fully disclaiming liability by contract, and establish higher
| standards of care for software in specific high-risk scenarios.
| To begin to shape standards of care for secure software
| development, the Administration will drive the development of an
| adaptable safe harbor framework to shield from liability
| companies that securely develop and maintain their software
| products and services. [1]
|
| LLMs can help! But using them without careful babysitting will
| expose your company to unlimited liability for any mistakes they
| make. Do you have humans doing proper checking on their output?
| Safe harbor protections for you!
|
| [1] https://www.whitehouse.gov/wp-
| content/uploads/2023/03/Nation...
| gtirloni wrote:
| LLMs are already being stripped of their "magic" by AI safety
| efforts.
|
| This will only limit them further until they are nothing more
| than a fancy API endpoint, almost like the "old" APIs but
| consuming 1000x more energy.
|
| I bet these "standards of care for secure software development"
| will demand a very deterministic API be put in front of the
| LLMs to ensure only approved output passes through. At which
| point I question the usefulness of such a solution.
| pookha wrote:
| Software Developers (humans) are the horses, the models are the
| mechanical cars, and the analysts and business experts are the
| drivers. Software's only job is to empower the end user, and
| that's exactly what these models are doing. We're going to see a
| massive paradigm shift in how software technology is built in the
| next decade.
| bee_rider wrote:
| Maybe.
|
| Currently, these models seem to be useful but produce incorrect
| code sometimes, right? So coders might still need to exist. And
| do they actually work well for whole projects, or just snippets
| of code (I'm sure increasing project size will be an area of
| rapid improvement)? Also, the models are trained on existing
| code; will they need extra data to train them?
|
| The analogy would fit if we still needed horses to drive manual
| cars or to help navigate. Or if cars were an interpolation of
| horse behavior and we studied horses to improve the operation
| of cars.
| zeroonetwothree wrote:
| More like the developers are the drivers, the dev environment
| is the horse, and the analysts and business folks are the
| passengers. And how they love to backseat drive...
| JohnFen wrote:
| > In summary, I believe there would still be a market for
| Software Developers in the foreseeable future, though the nature
| of work will change
|
| This is precisely what I dread. When it comes to software
| development specifically, the parts that the AI cheerleaders are
| excited about AI doing are exactly the parts of the job that I
| find appealing. If I wanted to be a glorified systems integrator,
| I would have been doing that job already. The parts that the
| author is saying will still exist are the parts I put up with in
| order to do the enjoyable and satisfying work.
|
| So this essay, if it's correct, explains the way that AI
| threatens my career. Perhaps there is no role for me in the
| software development world anymore. I'm not saying that's bad in
| the big picture, just that it's bad for me. It increasingly
| appears that I've chosen the wrong profession.
| ActionHank wrote:
| Also, if there are fewer humans involved in code production,
| there is a lot of room for producing code that "works" but is
| not cohesive or maintainable. Invariably there will be a point
| at which something is broken and someone will need to wade
| through the mess to find why it's broken and try to fix it.
| bluefirebrand wrote:
| Nah, you just throw it out and have the AI generate an all
| new one with different problems!
| Jensson wrote:
| I really look forward to all programs now having new
| strange bugs every release. They already do, but I expect
| AI to do that more at first.
| falcor84 wrote:
| I don't know if AIs will ever get really good at QA in
| general, but I do think that AIs can get quite good
| quickly at regression testing.
| ipaddr wrote:
| The hacking opportunities will be endless. Feeding AI the
| exploit will be new.
| CuriouslyC wrote:
| That's how bads use GPT to code. The right way is to ask
| GPT to break the problem down into a bunch of small
| strongly typed helper functions with unit tests, then ask
| it to compose the solution from those helper functions,
| also with integration tests. If tests fail at any point you
| can just feed the failure output along with the test and
| helper function code back in and it will almost always get
| it right for reasonably non-trivial things by the second
| try. It can also be good to provide some example helper
| functions/tests to give it style guidelines.
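|
| Roughly, the shape you're steering it toward looks like this
| (a toy sketch - the task and helper names here are made up for
| illustration, not from any real project): small, strongly typed
| helpers, each with a test whose failure output you can paste
| back to the model, composed into one top-level function.
|     from typing import List
|
|     def normalize_email(raw: str) -> str:
|         # Helper 1: trim whitespace and lowercase.
|         return raw.strip().lower()
|
|     def dedupe(items: List[str]) -> List[str]:
|         # Helper 2: drop duplicates, keep first-seen order.
|         seen = set()
|         return [x for x in items
|                 if not (x in seen or seen.add(x))]
|
|     def clean_emails(raw: List[str]) -> List[str]:
|         # Compose the helpers into the actual solution.
|         return dedupe([normalize_email(r) for r in raw])
|
|     def test_clean_emails():
|         # Failing output from tests like this is what gets fed
|         # back to the model together with the helper code.
|         assert clean_emails([" A@x.com", "a@x.com "]) == ["a@x.com"]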
| bluefirebrand wrote:
| If you're already doing all of this work then it's
| trivial to actually type all the stuff in yourself
|
| Is GPT actually saving you any time if it can't actually
| do the hard part?
| CuriouslyC wrote:
| It's not really "all this work," once you have good
| prompts you can use them to crank out a lot of code very
| quickly. You can use it to crank out thousands of lines
| of code a day that are somewhat formulaic, but not so
| formulaic that a simple rules based system could do it.
|
| For example, I took a text document with headers for table
| names and unordered lists for table columns, and had it
| produce a database schema which only required minor
| tuning, which I then used to generate sqlmodel classes
| and typescript types. Then I created an example component
| for one entity and it created similar components for the
| others in the schema. LLMs are exceptionally good at this
| sort of domain transformation; a decent engineer could
| easily crank out 2-5k lines/day if they were mostly doing
| this sort of work.
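|
| To make that concrete, the sqlmodel side of such a run ends up
| looking roughly like this (the table and columns below are
| invented for illustration, not the actual schema from that
| project):
|     from typing import Optional
|     from sqlmodel import SQLModel, Field
|
|     class Property(SQLModel, table=True):
|         # One generated class per header in the source document;
|         # each unordered-list item becomes a typed column.
|         id: Optional[int] = Field(default=None, primary_key=True)
|         address: str
|         price_cents: int
|         listed: bool = True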
| croo wrote:
| You know with GPT you can do these steps in a language
| you are not familiar with and it will still work. If you
| don't know some aspect of the language or its
| environmental specifics you can just chat until you find
| out enough to continue.
| thetacitman wrote:
| How do I know if a problem needs to be broken down by
| GPT, and how do I know if it broke the problem down
| correctly? What if GPT is broken or has a billing error,
| how do I break down the problem then?
| Taylor_OD wrote:
| This is the future imagined by A Fire Upon the Deep and its
| sequel. While less focused on the code being generated by AI,
| it features seemingly endless amounts of code and programs
| that can do almost anything, but the difficulty is finding the
| program that works for you and is safe to use.
|
| To some extent... This is already the world we live in. A lot
| of code is unreadable without a lot of effort or expertise.
| If all code was open sourced there would almost certainly be
| code written to do just about anything you'd like it to. The
| difficulty would be finding that code and customizing it for
| your use.
| Normalcy294812 wrote:
| Thanks for the Book Title. It looks like an interesting
| read.
| broast wrote:
| Certainly many of us here already have a good amount of
| experience debugging giant legacy spaghetti code-bases
| written by people you can't talk to, or people who can't
| debug their own code. That job may not change much.
| shrimp_emoji wrote:
| In the limit, it's all professions. :p Software development
| tomorrow, $other_profession the day after tomorrow.
|
| But same; if AI starts writing code and my job becomes tweaking
| pre-written code, I'm planning an exit strategy. :D
| warbled_tongue wrote:
| This resonates strongly with me. I don't want to describe the
| painting, I want to paint it. If this is indeed where we end
| up, I don't know that I'll change professions (I'm 30+ years
| into it), but the joy will be gone. It will truly become "just
| a job".
| sanity wrote:
| I remember back in the 80s I had friends who enjoyed coding
| in assembly and felt that using higher-level languages was
| "cheating" - isn't this just a continuation of that?
| xandrius wrote:
| Great point.
|
| I think this will weed out the people doing tech purely for
| the sake of tech and will bring more creative minds who see
| the technology as a tool to achieve a goal.
| brailsafe wrote:
| Indeed, can't wait for the day when technical people can
| stop relishing in the moments of intimate problem solving
| between stamping out widgets, and instead spend all day
| constantly stamping out widgets while thinking about the
| incredible bullshit they'll be producing for pennies.
| Thanks boss!
| falcor84 wrote:
| Yeah, that's a good way of looking at it. We gradually
| remove technical constraints and move to a higher level of
| abstraction, much closer to the level of the user and the
| business rather than the individual machine. But what's the
| endpoint of this? There will probably always be a need for
| expert-level troubleshooters and optimizers who understand
| all the layers, but for the rest of us, I'm wondering if
| the job wouldn't generally become more product management
| than engineering.
| rented_mule wrote:
| I'm not sure that there is an endpoint, only a
| continuation of the transitions we've always been making.
|
| What we've seen as we transitioned to higher and higher
| level languages (e.g., machine code - macro assembly - C
| - Java - Python) on unimaginably more powerful machines
| (and clusters of machines) is that we took on more
| complex applications and got much more work done faster.
| The complexity we manage shifts from the language and
| optimizing for machine constraints (speed, memory, etc.)
| to the application domain and optimizing for broader
| constraints (profit, user happiness, etc.).
|
| I think LLMs also revive hope that natural languages
| (e.g., English) are the future of software development
| (COBOL's dream finally realized!). But a core problem
| with that has always been that natural languages are too
| ambiguous. To the extent we're just writing prompts and
| the models are the implementers, I suspect we'll come up
| with more precise "prompt languages". At that point, it's
| just the next generation of even higher level languages.
|
| So, I think you're right that we'll spend more of our
| time thinking like product managers. But also more of our
| time thinking about higher level, hard, technical
| problems (e.g., how do we use math to build a system that
| dynamically optimizes itself for whatever metric we care
| about?). I don't think these are new trends, but
| continuing (maybe accelerating?) ones.
| nopinsight wrote:
| > But also more of our time thinking about higher level,
| hard, technical problems (e.g., how do we use math to
| build a system that dynamically optimizes itself for
| whatever metric we care about?).
|
| It's likely that a near-future AI system can suggest
| suitable math and implement it in an algorithm for the
| problem the user wants solved. An expert who understands
| it might be able to critique and ask for a better
| solution, but many users could be satisfied with it.
|
| Professionals who can deliver added value are those who
| understand the user better than the user themselves.
| rented_mule wrote:
| This kind of optimization is what I did for the last few
| years of my career, so I might be biased / limited in my
| thinking about what AI is capable of. But a lot of this
| area is still being figured out by humans, and there are
| a lot of tradeoffs between the math/software/business
| sides that limits what we can do. I'm not sure many
| business decision makers would give free rein to AI (they
| don't give it to engineers today). And I don't think
| we're close to AI ensuring a principled approach to the
| application of mathematical concepts.
|
| When these optimization systems (I'm referring to
| mathematical optimization here) are unleashed, they will
| crush many metrics that are not a part of their objective
| function and/or constraints. Want to optimize this
| quarter's revenue and don't have time to put in a
| constraint around user happiness? Revenue might be
| awesome this quarter, but gone in a year because the
| users are gone.
|
| The system I worked on kept our company in business
| through the pandemic by automatically adapting to
| frequently changing market conditions. But we had to
| quickly add constraints (within hours of the first US
| stay-at-home orders) to prevent gouging our customers. We
| had gouging prevention in before, but it suddenly changed
| in both shape and magnitude - increasing prices
| significantly in certain areas and making them free in
| others.
|
| AI is trained on the past, but there was no precedent for
| such a system in a pandemic. Or in this decade's wars, or
| under new regulations, etc. What we call AI today does
| not use reason. So it's left to humans to figure out how
| to adapt in new situations. But if AI is creating a
| black-box optimization system, the human operators will
| not know what to do or how to do it. And if the system
| isn't constructed in a mathematically sound way, it won't
| even be possible to constrain it without significant
| negative implications.
|
| Gains from such systems are also heavily resistant to
| measurement, which we need to do if we want to know if
| they are breaking our business. This is because such
| systems typically involve feedback loops that invalidate
| the assumption of independence between cohorts in A/B
| tests. That means advanced experiment designs must be
| found that are often custom for every use case. So, maybe
| in addition to thinking more like product managers,
| engineers will need to be thinking more like data
| scientists.
|
| This is all just in the area where I have some expertise.
| I imagine there are many other such areas. Some of which
| we haven't even found yet because we've been stuck doing
| the drudgery that AI can actually help with. [cue the
| song _Code Monkey_ ]
| vczf wrote:
| The endpoint is that being a programmer becomes as
| obsolete as being a human "calculator" for a career.
|
| Millions, perhaps billions of times more lines of code
| will be written, and automated programming will be taken
| for granted as just how computers work.
|
| Painstakingly writing static source code will be seen the
| same way as we see doing hundreds of pages of tedious
| calculations using paper, pencil, and a slide rule. Why
| would you do that, when the computer can design and
| develop such a program hundreds of times in the blink of
| an eye to arrive at the optimal human interface for your
| particular needs at the moment?
|
| It'll be a tremendous boon in every other technical
| field, such as science and engineering. It'll also make
| computers so much more useful and accessible for regular
| people. However, programming as we know it will fade into
| irrelevance.
|
| This change might take 50 years, but that's where I
| believe we're headed.
| withinboredom wrote:
| Yet, we still have programmers writing assembly code and
| hand-optimizing it. I believe that for most software
| engineers, this will be the future. However, experts and
| hobbyists will still experiment with different ways of
| doing things, just like people experiment with different
| ways of creating chairs.
|
| An AI can only do what it is taught to do. Sure, it can
| offer unique insights from time to time, but I doubt it
| will get to the point where it can craft entirely new
| paradigms and ways of building software.
| vczf wrote:
| You might be underestimating the potential of an
| automated evolutionary programming system at discovering
| novel and surprising ways to do computation--ways that no
| human would ever invent. Humans may have a better
| distribution of entropy generation (i.e. life experience
| as an embodied human being), but compared to the rate at
| which a computer can iterate, I don't think that
| advantage will be maintained.
|
| (Humans will still have to set the goals and objectives,
| unless we unleash an ASI and render even that moot.)
| beacon294 wrote:
| ASI?
| vczf wrote:
| Artificial Super-Intelligence
| nradov wrote:
| Perhaps, but evolutionary results are difficult to test.
| They tend to fail in bizarre, unpredictable ways in
| production. That may be good enough for some use cases
| but I think it will never be very applicable to mission
| critical or safety critical domains.
|
| Of course, code written by human programmers on the lower
| end of the skill spectrum sometimes has similar
| problems...
| vczf wrote:
| It doesn't seem like a completely different thing to
| generate specifications and formally verified programs
| for those specifications (though I'm not familiar with
| how those are done today).
| withinboredom wrote:
| AI, even in its current form can provide some interesting
| results. I wouldn't underestimate an AI, but I think you
| might be underestimating the ingenuity of a bored human.
| ta1243 wrote:
| Humans aren't bored any more [0]. In the past the US had
| 250 million people who were bored. Today it has far more
| than that scrolling through Instagram and TikTok,
| responding to Reddit and Hacker News, and generally not
| having time to be bored.
|
| Maybe we'll start to evolve as a species to avoid that,
| but AI will be used to ensure we don't, optimising far
| faster than we can evolve to keep our attention
|
| [0] https://bigthink.com/neuropsych/social-media-
| profound-boredo...
| kaba0 wrote:
| > The endpoint is that being a programmer becomes as
| obsolete as being a human "calculator" for a career.
|
| Yeah, the same time the singularity happens, and then
| your smallest problem will be eons bigger than your job.
|
| But LLMs can't solve a sudoku, so I wouldn't be too
| afraid.
| jacobr1 wrote:
| They are pretty close. LLMs can write the code to solve
| a sudoku, or leverage an existing solver, and execute the
| code. Agent frameworks are going to push the boundaries
| here over the next few years.
| kaba0 wrote:
| > LLMs can write the code to solve a sudoku
|
| It's literally part of its training data. The same way it
| knows how to solve leetcode, etc.
| financypants wrote:
| Is a more generic version of this argument that there
| will always be a need for smart/experienced people?
| thfuran wrote:
| >There will probably always be a need for expert-level
| troubleshooters and optimizers who understand all the
| layers
|
| There are already so many layers that essentially no one
| knows them all at even a basic level, let alone expert. A
| few more layers and no one in the field will even know
| _of_ all the layers.
| elicksaur wrote:
| The difference is that your friend has a negative view of
| others that the OP is not presenting. They're just stating
| their subjective enjoyment of an activity.
| MisterTea wrote:
| Boilerplate being eliminated by syntactic sugar or runtime
| is not the same thing. Sure, that made diving in easier, but
| it didn't abstract away the logic design part - the actual
| programming part. Now the AI spits out code for you without
| thinking about the logic.
| tejohnso wrote:
| Seems so. Those friends did have to contend with the
| enjoyable part of their job disappearing. Whether they
| called it cheating or not doesn't diminish their loss.
| jjmarr wrote:
| It didn't; there are still many roles for skilled
| assembly programmers in performance-critical or embedded
| systems. It's just that their market share in the overall
| world of programming has decreased due to high-level
| programming languages, although better technology has
| increased the size of the market that might have demand
| for assembly.
| __loam wrote:
| I think it's a fundamentally different thing, because AI is
| a leaky abstraction. I know how to write c code but I
| actually don't know how to write assembly at all. I don't
| really need to know about assembly to do my job. On the
| other hand, if I need to inspect the output of the AI to
| know that it worked, I still need to have a strong
| understanding of the underlying thing it's generating. That
| is fundamentally not true of deterministic tools like
| compilers.
| spit2wind wrote:
| David Parnas has a great take on this:
|
| "Automatic programming always has been a euphemism for
| programming with a higher level language than was then
| available to the programmer. Research in automatic
| programming is simply research in the implementation of
| higher-level languages.
|
| Of course automatic programming is feasible. We have known
| for years that we can implement higher-level programming
| languages. The only real question was the efficiency of the
| resulting programs. Usually, if the input 'specification'
| is not a description of an algorithm, the resulting program
| is woefully inefficient. I do not believe that the use of
| nonalgorithmic specifications as a programming language
| will prove practical for systems with limited computer
| capacity and hard real-time deadlines. When the input
| specification is a description of an algorithm, writing the
| specification is really writing a program. There will be no
| substantial change from our present capacity.
|
| The use of improved languages has led to a reduction in the
| amount of detail that a programmer must handle and hence to
| an improvement in reliability. However, extant programming
| languages, while far from perfect, are not that bad. Unless
| we move to nonalgorithmic specifications as an input to
| those systems, I do not expect a drastic improvement to
| result from this research.
|
| On the other hand, our experience in writing nonalgorithmic
| specifications has shown that people make mistakes in
| writing them just as they do in writing algorithms."
|
| Programming with AI, so far, tries to specify something
| precise, algorithms, in a less precise language than what
| we have.
|
| If AI programming can find a better way to express the
| problems we're trying to solve, then yes, it could work. It
| would become a matter of "how well the compiler works". The
| current proposals, with AI and prompting, is to use natural
| language as the notation. That's not better than what we
| have.
|
| It's the difference between Euclid and modern notation,
| with AI programming being like Euclidean notation and
| current programming languages being the modern notation:
|
| "if a first magnitude and a third are equal multiples of a
| second and a fourth, and a fifth and a sixth are equal
| multiples of the second and fourth, then the first
| magnitude and fifth, being added together, and the third
| and sixth, being added together, will also be equal
| multiples of the second and the fourth, respectively."
|
| a(x + y) = ax + ay
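|
| (A fuller rendering of the quoted proposition in modern
| notation: if a = mb, c = md, e = nb and f = nd, then
| a + e = (m + n)b and c + f = (m + n)d.)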
|
| You can't make something simpler by making it more complex.
|
| https://web.stanford.edu/class/cs99r/readings/parnas1.pdf
| Aperocky wrote:
| Except that's what low-code is today. You'll have to describe
| it in such detail that you might as well paint it
| yourself.
|
| Maybe it will abstract away setting up the paint and brush
| and the canvas, that part I'm fine with though.
| karmakaze wrote:
| Funny you should phrase it this way. I know you mean prompts
| as description, but I would currently prefer
| declaring/describing what I want in a higher-level functional
| way rather than doing all the stateful nitty-gritty
| iterations to get it done. Some folks want to do manual
| memory management, or even work with a borrow checker, I'm
| good for most purposes with gc.
|
| The question is always what's your 'description' language and
| what's your 'painting' language? I see the same in music:
| DJ's mix and apply effects to pre-recorded tracks, others
| resample on-the fly, while some produce new music from
| samples, and others form a collage from generated
| soundscapes, etc. It's all shades of gray.
| pksebben wrote:
| At the risk of exposing my pom-poms, it's not the writing of
| the code or the design of the systems that I find the current
| batch of AI useful for.
|
| Probably the biggest thing that GPT does for me these days is
| to replace google (which probably wouldn't be necessary if
| google hadn't become such hot garbage). As I say this, I'm made
| aware of the incoming rug-pull when the LLMs start spitting SEO
| trash in my face as well, but right now they don't which is
| just the best.
|
| A close second is having a rubber duck that can actually have a
| cogent thought once in a while. It's a lot easier to talk
| through a problem when you have something that will never get
| tired of listening - try starting with a prompt like "I don't
| want advice or recommendations, but instead ask me questions to
| elaborate on things that aren't completely clear". The results
| (sometimes) can be really, really good.
| tomjakubowski wrote:
| For me the principal benefit of ChatGPT is it helps me to
| maintain focus on a problem I'm solving, while I wait for a
| slow build or test suite or whatever. I can bullshit about
| it without annoying my coworkers with Slack messages. And
| sometimes I'll find joy reveling in the chatbot's weird
| errors and hallucinations.
|
| I suppose my lunch is about to be eaten by all these people
| who will use it to automate the software engineer job away.
| So it goes
| ToucanLoucan wrote:
| Call me a cynic (many have, especially on this topic) but I
| can't help but think that the majority of what AI will
| "successfully" replace in terms of craftsmanship is going to be
| stuff that would've never been produced the "correct" way if
| you will. It's going to be code created for and to suit the
| interests of the business major class. Just like AI art isn't
| really suitable for anything above hobby fun stuff like
| generating your D&D character's avatar, or product packaging
| stock photo junk or header images for LinkedIn blog posts.
| Anything that's actually important is going to still need to be
| designed, and that goes for creative work like design, and
| proper code-work for development too, IMO.
|
| Like sure, these AI's can generate code that works. Can they
| generate replacement code when you need to change how something
| works? Can they troubleshoot code that isn't doing what it's
| meant to? And if you can generate the code you want but then
| need to tweak it after to suit your purpose, is that... really
| that much faster than just writing the thing in your style, in
| a way you understand, that you can then change later as
| required?
|
| I dunno. I've played with these tools and they're neat, and I
| think they can be good for learning a new language or
| framework, but once I'm actually ready to build something, I
| don't see myself starting with AI generation for any
| substantial part of it.
| Oioioioiio wrote:
| The question is not about what AI can do today but what we
| assume AI will be able to do tomorrow.
|
| All of what you wrote in your second paragraph will become
| something AI will be doing better and faster than you.
|
| We never had technology which can write code like this. I
| prompted ChatGPT to write a very basic Java tool which
| renders an image from a URL and makes it bigger on a click.
| It just did it.
|
| It's not hard to think further, and a lot of technology is
| already going in this direction. Just last week Devin was
| shown. Gemini has a context window of 1 million tokens. Groq
| shows us how it will feel to have instant responses.
|
| Right now it's already good enough that people with Copilot
| say they want to keep it when asked. We already pay billions
| for AI daily. This means the amount of research, business
| motivation and money flowing into it now is probably
| staggering in comparison to what moved this field a few
| years ago.
|
| It's not clear at all how fast we will progress, but I'm
| pretty sure we will hit a time where every junior is worse
| than AI, which will force people to rethink what they are
| going to do. Do I hire a junior and train them? Or do I
| prefer to invest more into AI? The gap will widen and
| widen; a generation or a certain number of people will stay
| longer and might be able to stay in development, but a lot
| of others might just not.
| dartos wrote:
| > We never had technology which can write code like this. I
| prompted ChatGPT to write a very basic Java tool which
| renders an image from a URL and makes it bigger on a
| click. It just did it.
|
| It's worth noting, that it can do things like that because
| of the large amount of "how to do simple things in java"
| tutorials there are on the internet.
|
| Ask an AI to _make_ java, and it won't (and will continue
| to not) be able to.
|
| That's the level that AI will fail at, when things aren't
| easily indexed from the internet and thus much harder /
| impossible to put into a training set.
|
| I think the technology itself (transformers and other such
| statistical models) has exhausted most of its low-hanging
| fruit by now.
|
| Sora, for example, isn't a grand innovation in the way
| latent space models, word2vec, or transformers are; it's
| just a MUCH larger model than DALLE-3. Which is great! But
| it still has the limits inherent to statistical models. They
| need the training data.
| ToucanLoucan wrote:
| > It's worth noting, that it can do things like that
| because of the large amount of "how to do simple things
| in java" tutorials there are on the internet.
|
| Much like the same points made elsewhere with regard to
| AI art: It cannot invent. It can remix, recombine, etc.
| but no AI model we have now is anywhere close to where it
| could create something entirely new that's not been seen
| before.
| ToucanLoucan wrote:
| > The question is not about what AI can do today but what
| we assume AI will be able to do tomorrow.
|
| And I think many assumptions on this front are products of
| magical thinking that are discarding limitations of LLMs in
| favor of waiting for the intelligence to emerge from the
| machine, which isn't going to happen. ChatGPT and
| associated tech _is cool,_ but it is, at the end of the
| day, pattern recognition and reproduction. That 's it. It
| cannot invent something not before seen, or in our case
| here, it cannot write code that's never been written.
|
| Now that doesn't make it _useless,_ there 's tons of code
| that's being written all the time that's been written
| thousands of times before. But it does mean depending what
| you're trying to build, you will run into it's limitations
| pretty quickly and have to start writing it yourself. And
| that being the case... why not just do that in the first
| place?
|
| > We never had technology which can write code like this. I
| prompted ChatGPT to write a very basic Java tool which
| renders an image from a URL and makes it bigger on a
| click. It just did it.
|
| Which it did, because as the other comment said, tons of
| people already have.
|
| > It's not clear at all how fast we will progress, but I'm
| pretty sure we will hit a time where every junior is worse
| than AI, which will force people to rethink what they are
| going to do. Do I hire a junior and train them? Or do I
| prefer to invest more into AI? The gap will widen and
| widen; a generation or a certain number of people will stay
| longer and might be able to stay in development, but a lot
| of others might just not.
|
| I mean, this sounds like an absolute crisis in the making
| for software dev as a profession, when the entire industry
| is reliant on a small community of actual programmers
| overseeing tons of robot junior devs turning out mediocre
| code. But to each their own I suppose.
| __loam wrote:
| I think the question is whether we're going to plateau at 95%
| or not. It's possible that we just run into a wall with
| transformers, or they do iron it out and it does replace us
| all.
| jayd16 wrote:
| I understand your feelings, but I do also wonder if it's not
| similar to complaining about compilers or garbage collection.
| I'm sure there are people that love fiddling with assembly and
| memory management by hand. I assume there will be plenty of
| interesting/novel problems no matter the tooling because,
| fundamentally, software is about solving such problems.
| imbnwa wrote:
| Software engineering as an occupation _grew_ because of
| static analysis and GCs (literally why the labor market is
| the size that it is as we speak); the opposite appears to be
| the outcome of AI advances.
| glasss wrote:
| The same happened with accountants and spreadsheet
| software, the number of accounting jobs grew. The actual
| work they performed became different. I think a similar
| thing is likely to happen in the software world.
| imbnwa wrote:
| Tech has already learned there's not enough real frontier
| left to reap the bounty of (removing the zero interest rates
| that incentivize mere flow of capital). This stuff is
| being invested in to yield the most productivity at the
| least cost. There will either be a permanent net decrease
| in demand or, the work being so high level, most openings
| will pay no more than $60-70K in an America (likely with
| reduced benefits) where wages are already largely stagnant.
| glasss wrote:
| I think there is definitely merit to your statements. I
| believe the future of the average software developer job
| involves a very high level language, API integration,
| basic full stack work with a lot of AI assistance. And
| those roles will mostly be at small to medium businesses
| who can't afford the salaries or benefits that the
| industry has standard in the US.
|
| Almost every small business I know has an accountant or
| bookkeeper position, which is just someone with no
| formal education whose role is just managing
| QuickBooks. I don't think the need for formally educated
| accountants who can handle large corporate books
| decreased significantly, but I don't have any numbers to
| back that up. Just making the comparison to say I don't
| think the hard / cool stuff that a lot of software
| developers love doing is going away. But these are just
| my thoughts.
| samatman wrote:
| I'll take the other side of that bet.
|
| It's reasonable to expect that sometime relatively soon, AI
| will be a clear-cut aid to developer productivity. At the
| moment, I consider it a wash. Chatbots don't clearly save
| me time, but they clearly save me effort, which is a more
| important resource to conserve.
|
| Software is still heavily rate-limited by how much of it
| developers can write. Making it possible for them to write
| more will result in more software, rather than fewer
| developers. I've seen nothing from AI, either in production
| or on the horizon, that suggests that it will meaningfully
| lower the barrier to entry for practicing the profession,
| let alone enable non-developers to do the work developers
| do. It will make it easier for the inexperienced to do
| tasks which need a bit of scripting, which is good.
| visarga wrote:
| > I've seen nothing from AI, either in production or on
| the horizon, that suggests that it will meaningfully
| lower the barrier to entry for practicing the profession,
| let alone enable non developers to do the work developers
| do.
|
| Good observation. Come to think of it, all examples of AI
| coding require a competent human to hold the other end,
| or else it makes subtle errors.
| imbnwa wrote:
| How many humans do you need per project though? The
| number can only lower as AI tooling improves. And will
| employers pay the same rates when they're already paying
| a sub for their AI tools and the work involved is so much
| more high level?
| doktrin wrote:
| I don't claim to have any particular prescience here, but
| doesn't this assume that the scope of "software" remains
| static? The potential universe of programmatically
| implementable solutions is _vast_. Just so happens that
| many or most of those potential future verticals are not
| commercially viable in 2024.
| ElevenLathe wrote:
| Exactly. Custom software is currently very expensive.
| Making it cheaper to produce will presumably increase
| demand for it. Whether this results in more or fewer
| unemployed SWEs, and if I'll be one of them, I don't
| know.
| CuriouslyC wrote:
| If chatbots aren't saving you time you need to refine
| what you choose to use them for. They're absolutely
| amazing at refactoring, producing documentation, adding
| comments, translating structured text files from one
| format to another, implementing well known algorithms in
| newer/niche languages where repository versions might not
| exist, etc. On the other hand, I've mostly stopped asking
| GPT4 to write quickstart code for libraries that don't
| have star counts in the high thousands at least, and
| while I'll let it convert css/style objects/etc into
| tailwind, it's pretty bad at styling in general, though
| it is good at suggesting potentially problematic styles
| when debugging layout.
| samatman wrote:
| > _you need to refine what you choose to use them for_
|
| This is making assumptions about the work I do which
| don't happen to be valid.
|
| For example:
|
| > _libraries that [...] have star counts in the high
| thousands at least_
|
| Play little to no role in my work, and
|
| > _I'll let it convert css/style objects/etc into
| tailwind_
|
| Is something I simply don't have a use for.
|
| Clearly your mileage varies, and that's fine. What I've
| found is that for the sort of task I farm out to the
| chatbots, the time spent explaining myself clearly,
| showing it counterexamples when it gets things wrong, and
| otherwise verifying that the code is fit to purpose, is
| right around the time I would spend on the task to begin
| with.
|
| But it's less effort, which is good. I find that at least
| as valuable if not more so.
|
| > _producing documentation_
|
| Yikes. Not looking forward to that in the future.
| dartos wrote:
| > producing documentation
|
| I remember watching this really funny video where a
| writer, by trade, was talking about recent AI products
| they were exploring.
|
| They saw a "Make longer" button which took some text and
| made it longer by fluffing it out. He was saying that it
| was the antithesis of his entire career.
|
| As a high schooler who really didn't care, I would've
| loved it, though.
| ponector wrote:
| I've heard one CEO being asked about gen-AI tools to be
| used in the company. The answer was vague, like they are
| evaluating the tooling. However, one good example was
| given: ChatGPT is really good at writing emails, and at
| summarizing text as well.
|
| He said they don't want a situation where the sender is
| using ChatGPT to write a fancy email and the recipient is
| using ChatGPT to read it. However, I think that is the
| direction we are going in right now.
| int_19h wrote:
| This sort of thing is already being rolled out for emails
| and even pull requests in some large companies.
| dartos wrote:
| Yeah it's good for the kinds of emails that people don't
| really read or, at best, just skim over
| CuriouslyC wrote:
| I was giving examples, in the hopes that you could see
| the trend I was pointing towards for your own benefit.
| You can take that and learn from it or get offended and
| learn nothing, up to you.
|
| Not sure why you are scared of GPT assisted
| documentation. First drafts are universally garbage,
| honestly I expect GPT to produce a better and more
| accurate first draft in a fraction of the time, which
| should encourage a lot of people who otherwise wouldn't
| have documented at all to produce passable documentation.
| aibot923 wrote:
| > Software is still heavily rate-limited by how much of
| it developers can write
|
| Hmm. We have very different experiences here. IME, the
| vast majority of industry work is understanding,
| tweaking, and integrating existing software. There is
| very little "software writing" as a percentage of the
| total time developers spend doing their jobs across
| industry. That is the collective myth the industry uses
| to make the job seem more appealing and creative than it
| is.
|
| At least, this is my experience in the large FAANG type
| companies. We already have so much code. Just figuring
| out what that code does and what else to do with it
| constitutes the majority of the work. There is a huge
| legibility issue where relatively simple things are
| obstructed by the morass of complexity many layers deep.
| A huge additional fraction of time is spent on
| deployments and monitoring. A very small fraction of the
| work is creatively developing new software. For example,
| one person will creatively develop the interface and
| overall design for a new cloud service. The vast majority
| of work after that point is spent on integration,
| monitoring, testing, releases, and so on.
|
| The largest task of AI here would be understanding what
| is going on at both the technical layer and the fuzzy
| human layer on top. If it can only do #1, then knowledge
| workers will still spend a lot of effort doing #2 and
| figuring out how to turn insights from #1 into cashflow.
| Towaway69 wrote:
| Being a developer, I heartily agree with you.
|
| Being a human, I realise that I as a developer have put a lot
| of people out of a job. Those folks have had to adapt to that
| change.
|
| I guess now it's our time to adapt to change. At least it keeps
| me on my toes!
| imbnwa wrote:
| The catch is that those people could, barring the AI advances
| we seem to be seeing, retrain for an SWE labor market where
| supply lagged demand; that won't even be possible for devs
| put out of work in the future.
| cjbgkagh wrote:
| Those people who did retrain are the same devs being put
| out of work - which means they got hit with the setback
| twice and are worse off than people who started off as devs
| and thus only got hit once.
|
| Like the allied bomber pilots in WWII looking down below at
| the firestorm knowing that there is a good chance (~45%)
| that they too will join their fate only later.
| JohnFen wrote:
| > I guess now it's our time to adapt to change.
|
| I'm just saddened by the prospect that, for me, "adapting to
| change" would mean "no longer being able to make a living
| doing what I actually enjoy". That's why, if this is the
| future, it's a career-killing one for me. Whether or not I
| stay in the industry, there is no future in my chosen career
| path, and the alternative paths that people keep bringing up
| all sound pretty terrible to me.
|
| My only hope is that AI will not achieve the heights that its
| proponents are trying to reach (I suspect this is the case).
| I see no other good outcome for me.
| Towaway69 wrote:
| If AI does achieve the hyped heights then we're all out of
| a job, regardless of what we do.
|
| Many people suffer through bullshit jobs[1] so we are
| privileged to have - at least for a time - done what we
| really enjoy and got paid for it.
|
| [1] David Graeber, father of the term "bullshit jobs"
| theshackleford wrote:
| > I'm just saddened by the prospect that, for me, "adapting
| to change" would mean "no longer being able to make a
| living doing what I actually enjoy". That's why, if this is
| the future, it's a career-killing one for me.
|
| Ok, and? You don't think any of the others put out of their
| work by other forms of computing, like you, might've enjoyed
| their jobs? You don't think it might have been career
| ending for them?
| barfbagginus wrote:
| You will be able to speak what human needs must be fulfilled.
| Then the code will appear and you'll be able to meet those
| human needs.
|
| You, not a boss, will own the code.
|
| And if you have any needs, you will be able to speak the word
| and those needs will be met.
|
| You will no longer need to work at all. Whatever you want to
| build, you will be able to build it.
|
| What about that frightens you?
| Pugpugpugs wrote:
| What if we don't hit AGI and instead the tools just get
| pretty good and put lots of people out of work while making
| the top 0.1% vastly richer? Now you've got no prospects, no
| power, and barely any money.
| JohnFen wrote:
| None of that frightens me, but I also think that none of that
| is in the realm of reasonable possibility.
| aibot923 wrote:
| > You, not a boss, will own the code.
|
| Developers can already deploy code on massive infrastructure
| today, and what do we see? Huge centralization. Why? Because
| software is a free-to-copy, winner-takes-most game, where
| massive economies of scale mean a few players who can afford
| to spend big money on marginal improvements win the whole
| market. I don't think AI will change this. Someone will own
| the physical infrastructure for economies-of-scale style
| services, and they will capture the market.
| dist-epoch wrote:
| Most knives today are mass produced.
|
| But there are still knife craftsmen.
|
| You could become a software craftsman/artist if you enjoy
| writing software.
| javajosh wrote:
| The market is different, and so is the supply. The market for
| artisanal cutlery is basically an art market. The programmer
| supply today is an approaching-standardization factory
| worker. There IS an art market for software, in the indie
| gaming space, so perhaps that will survive (and AI could
| actually really help individual creators tremendously). But
| the work-a-day enterprise developer's days are numbered. The
| great irony being that all the work we've done to
| standardize, framework-ize the work makes us more fungible
| and replaceable by AI.
|
| The result I foresee is a further concentration of power into
| the hands of those with capital enough to own data-centers
| with AI capable hardware; the petite bourgeoisie will shrink
| to those able to maintain that hardware and (perhaps) as a
| finishing interface between the AI's output and the human
| controlling the capital that placed the order. It definitely
| harms the value proposition of people whose main talent is
| understanding computers well enough to make useful software
| with them. THAT is rapidly commoditizing.
| exceptione wrote:
| Since AI has been trained on the generous gifts of the
| collective (books, code repos, art, ..), it begs the
| question why normal societies would not start to regulate
| them as a collective good. I can foresee two forces that
| will work against society to claim it back:
|
| - Dominance of neoliberalism thought, with its strong
| belief that for any disease markets will be the cure.
|
| - Strong lobby from big corporates.
|
| You don't want to intervene too early, but you have to make
| sure you have at least some limits before you let the
| winners do too much damage. The EU has to be applauded for
| taking a critical look at what effects these developments
| might have, for instance which sectors will face
| unemployment.
|
| That is in the interest of both people and business,
| because the winner takes it all means economic and
| scientific stagnation. I fear that 90% of the world's data
| is already in the hands of just a few behemoths, so there is
| already no level playing field (which is btw caused by the
| aforementioned dominance of neoliberalism).
| javajosh wrote:
| _> AI has been trained on the generous gifts of the
| collective_
|
| Will be interesting to see how various copyright lawsuits
| pan out. In some ways I hope they succeed, as it would
| mean clawing back those gifts from an amorphous entity
| that would displace us (all?). In some ways I hope that
| we can resolve the gift problem by giving every human
| equity in the products produced by the collective value
| of the training data they produced.
|
| _> winner takes it all means economic and scientific
| stagnation_
|
| Given the apparent lack of awareness or knowledge of
| philosophy, history, or current events, it seems like a
| tough row to hoe getting the general public on board with
| this (correct) idea. Heck, we can't even pass a law
| overturning Citizens United, the importance of which is
| arguably even less abstract.
|
| When the tide of stupidity grows insurmountable, and The
| People cannot be stopped from self-harm, you get
| collapse, and the only way to survive it is to live
| within a pocket of reason, to carry the torch of
| civilization forward as best you can.
| exceptione wrote:
| > When the tide of stupidity grows insurmountable, and
| The People cannot be stopped from self-harm, you get
| collapse,
|
| Yes, people are unfortunately highly unaware of the
| societal ecosystem they depend on, and so cannot
| prioritize what is important. These topics don't sell in
| the media.
| stereolambda wrote:
| The sectors of work that have been largely pushed out of
| the economy in recent decades have not been defended by
| serious state policy. In fact there are whole groups of
| crucial workers, like teachers or nurses, who are kept
| around barely surviving in many countries. The groups
| protected by the state tend to be heavily organized _and_
| directly related to exploitation of natural strategic
| resources, like farmers or miners.
|
| There is no particular sympathy towards programmers in
| society, I don't think. Based on what I observe, calling
| the mood neutral would be fair, and this is mostly
| because the group expanded, and way more people have
| someone benefiting from IT in their family. I don't see
| why there would be a big intervention for programmers.
| Artists maybe, but these are proverbially poor anyway,
| and the ones with popular clout tended to somehow get
| rich despite the business models of culture changing.
|
| I am all for copyright reform etc., but I don't see
| making culture public good, in a way that directly leads
| to more artisanal creators, as anything straightforward.
| This would have to entail some heavier and non-obvious
| (even if desirable) changes to the economic system. It's
| debatable if _code_ is culture anyway, though I could see
| an argument for _software_ , like Linux and other tools.
|
| > I fear that 90% of the world's data
|
| Don't wanna go into a tangent in this already long post,
| but I'd dispute if these data really reflect the whole
| knowledge we accumulated in books (particularly non-
| English) and otherwise not put into reachable and
| digestible formats. Meaning, sure, they have these data,
| they can target individual people with private stuff they
| have on them, but this isn't the full accumulation of
| human knowledge that is objectively useful.
| exceptione wrote:
| > There is no particular sympathy towards programmers in
| society, I don't think.
|
| The concern policymakers have is not about programmers,
| but about boatloads of other people having no time to
| adapt to the massive wave these policymakers see coming.
|
| There are strong signals that anyone who produces text,
| speech, pictures or whatever is going to be affected by
| it. If the value of labor goes down, if a large part of
| humanity cannot reach a level anymore to meaningfully
| contribute, if productivity eclipses demand growth, you
| simply will see lots of people left behind.
|
| Strong societies depend on strong middle classes. If the
| middle class slips, so will the economy, which is no good
| news for blue-collar workers either. AI has the potential to
| suffocate the organism that created it.
| mattgreenrocks wrote:
| > The great irony being that all the work we've done to
| standardize, framework-ize the work makes us more fungible
| and replaceable by AI.
|
| I mean, at some level, this is what frameworks were meant
| to do: give you a loose outline and do all that messy
| design stuff for you. In other words: commodify some amount
| of software design skill. And I'm not saying that's bad.
|
| Definitely puts a different spin on the people that get mad
| at you in the comment section when you suggest it's
| possible to build something without a framework though!
| gitfan86 wrote:
| I'm the opposite. I enjoy engineering and understanding
| systems. Manually coding has been necessary to build
| systems up until now. AWS similarly was great because it
| provided a functional abstraction over the details of the
| data center.
|
| On a personal level I feel bad for the people who enjoyed
| wiring up small data centers or enjoyed writing GitHub comments
| about which lint rules were the best. But I'm glad those are no
| longer necessary.
| bamboozled wrote:
| People still wire up data centres.
| rzzzt wrote:
| Ryan D. Anderson's "We Wanna Do the Fun Stuff" captures this
| very concisely:
| https://www.instagram.com/itsryandanderson/p/BrY0N-lH31p/
| jacobr1 wrote:
| I suspect this is the wrong take. AI can only perform
| integrations when there are systems to integrate. The frontier
| of interesting work to be done isn't supervising an integration
| AI, but building out the hard components that will be
| integrated. Integration work itself has already been moving
| up the stack to low-code-type tools and power-user-like
| people over the past decade, even before LLMs became the
| new thing.
| Braini wrote:
| These are exactly my thoughts. I comfort myself by thinking
| that it is still a while away and also not certain, but this
| might just be willful ignorance on my side. Because TBH, no
| clue yet what else I would like to (or even could) do.
| bamboozled wrote:
| Whatever you decide to do next will be automated soon after
| anyway... career change? Don't bother. Jump on the progress
| train.
| godelski wrote:
| Sorry, can you clarify more? I don't think I understand. The
| part you enjoy the most is the integrating of systems, right?
| If that's really your passion, I'm not sure you're in danger of
| losing your job to AI. AI is not great at nuance and this is
| exponentially more challenging than what we've done so far. I'm
| just assuming that since this is your passion (if I'm
| understanding correctly) you see it as the puzzle it is,
| with all the complexities and uniqueness of each integration. If
| you're the type of person that's frustrated by low quality or
| quick shortcuts and not understanding the nuances actually
| involved, I think you're safe.
|
| I don't see AI pushing out deep thinkers and the "annoying"
| nuance devs anytime soon. I'm that kinda person too and yeah,
| I'm not as fast as my colleagues. But another friend (who is
| similar) and I both are surprised how often other people in our
| lab and groups we work with (we're researchers) talk about how
| essential GPT and copilot are to their workflows. Because
| neither of us think this way. I use GPT(4) almost every day,
| but it's impossible for me to get it to write good quality
| code. It's great at giving me routines and skeletons, but the
| real engineering part takes far more time to talk the LLM into
| than it does to write it (including all the time to google or
| even collaborate with GPT[0]). LLMs can do tough things, but
| their abilities are clearly directly proportional to the
| frequency of the appearance of those tasks. So I think it is
| the coding bootcamp people that are in the most danger.
|
| There are expert people who are also at risk though: the
| people with extremely narrow expertise. Because you can target
| LLMs for specific tasks. But if your skills are the skills that
| define us as humans, I wouldn't lose too much sleep. I say this
| as an ML researcher myself. And I highly encourage everyone to
| get into the mindset of thinking with nuance. It has other
| benefits too. But I also think we need to think about how to
| transition into a post-scarcity world, because that is the goal
| and we don't need AGI for that.
|
| [0] A common workflow for me is actually due to the
| shittiness of Google, where it overfits certain words and
| ignores advanced operators like quotes or NOTs, or when
| broaching a new high-level topic. I can't trust GPT's
| answer, but it
| will sure use keywords and vernacular I don't know that enable
| me to make a more powerful search. (But Google employees
| should not take away from this that they should push LLMs
| into Google search, but rather that search is mostly good,
| that the same nuance is important, and that being too
| forceful and repeating 5 pages of essentially the same
| garbage is not working. The SEO people attacked you and
| they won. It looks like you let them win too...)
| aggieNick02 wrote:
| The "Framework: Outsourced Software Development" image confuses
| me. What do the green/yellow/lavender circles represent in each
| 6x6 grid of circles?
| hcarvalhoalves wrote:
| I foresee we'll soon have an "Organic Software", "Software Made
| By Humans" seal of approval.
| sharadov wrote:
| IMO LLMs are going to be enhancers for experienced programmers.
|
| Although I worry that a lot of junior programming jobs will
| simply vanish. Software is facing the same headwinds that a
| lot of low-level office jobs faced: they were first shipped
| overseas and then automated away.
|
| A lot of software development which is CRUD applications is going
| to be disrupted.
|
| Bespoke software that requires specialized skills is not going
| away anytime soon.
| sorokod wrote:
| _The highest level would be like delegating part of your project
| or the entire project to a developer. These "AI coders" would
| take in the requirements, write the code, fix errors and deploy
| the final product to production._
|
| About the "write the code" part - what code is that? Machine
| code, Assembly, JS? Who creates the languages, compilers,
| interpreters and the plethora of tools required to operate these
| and deploy to production?
| threecheese wrote:
| This is where I'm stuck. The ecosystem we have today has
| evolved over >40y in lockstep with new hardware and technology,
| in a feedback loop of technological development. It is pretty
| resilient (from an evolutionary perspective). It includes
| layers of abstraction that hide enough complexity for humans to
| use it, but an AI doesn't need this crutch; if we delegate the
| code writing responsibilities to it then what happens to this
| ecosystem? Purely from an economic perspective it is likely
| that in the limit we narrow down to the most efficient way to
| do this, which will be a language designed for AI, and dollars
| to donuts it'll be owned by Google (et al). Will Python4 (etc)
| wither and die on the vine, due to lack of investment/utility?
| Then what happens to technology?
| sorokod wrote:
| In the limit, languages will be designed by AI for AI and the
| same goes for hardware. This assumes that languages higher
| than machine code will be needed, which is not obvious.
|
| Whether this will be achieved in a few years, a few decades,
| or never, I can't tell.
| mobiledev78791 wrote:
| > I assumed we were still many months away from this happening, but
| was proved wrong with the Devin demo - even though it can only
| perform simple development tasks now, there's a chance that this
| will improve in future.
|
| Doesn't Devin run using GPT-4? As such, as with all generative
| AI technologies, its performance is subject to the common
| challenges faced by these systems.
|
| Mainly, the transformer model, which is the foundation of GPT-4,
| is known for its linear scalability. This means that as more
| computational resources are allocated, its performance improves.
| However, this scalability is subject to diminishing returns,
| reaching a point where additional resources yield minimal
| improvements. This theoretical "intelligence" limit suggests that
| while continuous advancements are possible, they will eventually
| plateau.
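|
| (As a rough reference, not something from this thread: the
| standard empirical "neural scaling law" results express this
| kind of diminishing return as a power law in training compute,
| roughly
|
|     L(C) \approx (C_0 / C)^{\alpha}, \quad 0 < \alpha \ll 1
|
| where L is the model's loss, C is training compute, and C_0 and
| \alpha are fitted constants. The small exponent is why each
| doubling of compute buys a progressively smaller improvement.)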
|
| The future of software development will continue to involve human
| software engineers until we achieve true Artificial General
| Intelligence. Regardless of when or if AGI becomes a reality,
| your skills and expertise in your niche domain will remain
| valuable assets. While companies may leverage AI-powered software
| engineering tools to augment their workforce and in effect
| replace you, you as a skilled professional can do the same.
|
| If you possess a deep understanding of your core domain and a
| passion for building useful products, you can leverage AI
| software engineering tools, AI-powered design assistants, and AI-
| driven marketing solutions to launch new startups more
| efficiently and with less capital investment, especially in the
| realm of software-centric businesses.
|
| So, the way I see it, it is the businesses that need to be
| afraid, if AI becomes capable enough to start replacing their
| workers, as it will also make it easier for most software
| engineers and product managers to build competing products in
| their areas of specialization.
| SavageBeast wrote:
| It's an easy enough experiment to conduct to find out for sure,
| first hand. We're all developers here, right? Get out some
| previous project ticket (you do have those, right?) and put your
| manager hat on. Take a whole big step off the LLM's plate and
| convert your ticket into prompts directing the AI to generate
| a certain subsection of code for each particular requirement. Do
| this for all the requirements until every use case is satisfied
| with some code to execute it.
|
| Now, get all that code to compile, run at all, run correctly,
| then finally run optimally. Hell, get it to pass a basic test
| case. Use the LLM for all of this. Feed the bugs to the LLM and
| ask for a fix for each failing condition, etc.
|
| Even simpler, pull a ticket from your next sprint and assign it
| to a dev with the instructions to use the LLM as entirely as
| possible for the task.
|
| The results of this experiment will be informative, and unless
| you're building a demonstration API endpoint that reverses binary
| trees or something else trivial, you will cease to be worried
| about AGI taking over anyone's job in the near to medium term.
| Try it FIRST - THEN flame me if you still feel like it (if
| you're not still busy fucking with prompts to build code you
| could have built in less time yourself).
|
| To be clear, I'm a proponent of this kind of technology and I use
| it daily in my dev/ops/everything work. It's faster and more
| accurate than reading docs to figure something out. I'm never
| asking GPT to "do" something novel so much as I'm asking it to
| summarize something it knows, and I'm setting a context for the
| results. I can't tell you the last time I read a man page for
| some Bash thingy - it's just fine to ask GPT to build me a Bash
| thingy that does XYZ.
|
| Of note, I've asked GPT-4 for some very specific things lately
| re: configurations (I have 3 endpoints in different protocols I'd
| like to proxy via Nginx and I'd like to make these available to
| the outside world with Ngrok - tell me the configurations
| necessary to accomplish this). It took me quite a bit of mucking
| around to get it working and the better part of the day to be
| satisfied with it. I'm pretty confident a suitable intern would
| have had difficulty with it too.
|
| AI is great and ever increasing in its abilities, but we're just
| not there yet - we'll get ever closer as time goes on, but we'll
| never quite get there for general-purpose development-in-whole-
| by-AI. A line that continually approaches a given curve but does
| not meet it at any finite distance - that's an asymptotic
| relationship, and I believe it describes where we are today very
| well.
| kungfupawnda wrote:
| There will always be more novel code to write that the AI
| hasn't learned.
| WesolyKubeczek wrote:
| Can't GPT-4 replace a middle manager today already? Why aren't
| useless ticket pushers afraid?
|
| "Reformulate this problem statement for a programmer to
| implement" straight from executive's mouth and then "given these
| status updates, tell me if we are there and if anyone is trying
| to bullshit" is a perfect thing for an LLM.
| jarsin wrote:
| Gone are the days of hearing our jobs were going to be outsourced
| to India. Now AI is going to do our jobs.
|
| I know quite a few people in my circle who never got into tech
| because they believed all this outsourcing crap. After 20 years I
| can say without a doubt I am much better off than any of them.
|
| How many now will never get into tech due to AI?
| Animats wrote:
| Big question: how to represent programs when some kind of AI-type
| system is doing most of the work. Code is all "what" and no
| "why". Intention is represented, if at all, in the comments. If
| you use an AI to modify code, it needs intention information to
| do its job. Where will that information come from?
| nopinsight wrote:
| Sam Altman: The nature of programming will change a lot. Maybe
| some people will program entirely in natural language.
|
| from min 1:29:55 -- Sam Altman & Lex Fridman's new interview,
| incl questions about AGI
|
| https://youtu.be/jvqFAi7vkBc?si=tZXNdVnOSk1iWX34
|
| I agree but there will likely be demand for experts to help
| direct AI to create a better solution than otherwise and even
| write pseudocode and math formulas sometimes. The key is for the
| expert to understand the user's needs better than AI by itself
| and ideally better than the user themselves.
|
| Many/most software engineers could act more like art directors
| rather than artists, or conductors instead of musicians.
| xyst wrote:
| Whole article is a bit rushed. Seems like a knee-jerk reaction
| based off the tech demo of "Devin AI".
|
| I'll admit the tech demo was impressive on the face of it. But it
| had all the flags showing it was a well-rehearsed demo (think
| Steve Jobs and the original iPhone debut) or even simulated
| images and videos. For all we know the code was written well in
| advance and recorded by people, then the tech CEO Steve Jobs'd
| the shit out of that performance.
|
| I notice the AI is still in preview and locked behind a gate.
| cynicalsecurity wrote:
| There are fundamental flaws with AI-generated code:
| unmaintainability and unpredictability.
|
| An attempt to modify an already generated piece of code,
| or the program in general, will produce an unexpected result.
| Saving some money on programmers but then losing millions in
| lawsuits, or losing customers and eventually the whole business
| due to unexpected behaviour of the app or a data leak, might not
| be a good idea after all.
| lazide wrote:
| Retorts from the business side:
|
| * sounds like next quarters problem
|
| * since everyone is doing it, sounds like it will be society
| who has to figure out an answer, not me. (too big to fail)
|
| Not joking, I think those are the current de facto strategies
| being employed.
| withinboredom wrote:
| Really get them going when you mention that "too big to fail"
| is a logical fallacy.
| airstrike wrote:
| Playing devil's advocate, between compilers and tests, is it
| really less predictable than some junior developer writing the
| code?
|
| If you're pushing unreviewed, untested code to production,
| that's a bigger problem than the quality of the original code.
| snoman wrote:
| Who reviews and tests the code?
|
| And how do they build the knowledge and skill needed to
| review and test without being practiced?
| bsder wrote:
| You're assuming that AI-based code is even that minimally good.
|
| Here's a nice litmus test: Can your AI take your code and make
| it comply with accessibility guidelines?
|
| That's a task that has relatively straightforward subtasks
| (make sure that labels are consistent on your widgets) yet is
| still painful (going through all of them to make sure they are
| correct is a nightmare). A _great_ job to throw at an AI.
|
| And, yet, throwing any of the current "AI" bots at that task
| would simply be laughable.
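|
| As a rough illustration of how mechanical that subtask is (a
| toy sketch, not anything a current AI tool does: it assumes
| plain HTML form controls and ignores aria-label and implicit
| labels), a consistency check for missing <label for=...>
| associations can be a few lines of Python:
|
|     from html.parser import HTMLParser
|
|     class LabelAudit(HTMLParser):
|         """Collect form-control ids and label targets."""
|         def __init__(self):
|             super().__init__()
|             self.control_ids = []
|             self.labelled_ids = set()
|
|         def handle_starttag(self, tag, attrs):
|             attrs = dict(attrs)
|             if tag in ("input", "select", "textarea"):
|                 self.control_ids.append(attrs.get("id"))
|             elif tag == "label" and "for" in attrs:
|                 self.labelled_ids.add(attrs["for"])
|
|     def unlabelled_controls(html: str) -> list:
|         audit = LabelAudit()
|         audit.feed(html)
|         return [cid for cid in audit.control_ids
|                 if cid is None or cid not in audit.labelled_ids]
|
|     # Flags the second input, which has no matching <label>.
|     print(unlabelled_controls(
|         '<label for="email">Email</label><input id="email">'
|         '<input id="age">'))   # -> ['age']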
| majani wrote:
| Takes like this miss the forest for the trees. The overall
| point is that automated programming is now a target, just like
| automating assembly lines became a target back in the day.
| There will be kinks in the beginning, but once the target is
| set, there will be a huge incentive to work out the kinks to
| the point of near-full automation.
| uticus wrote:
| > Apart from what the AI model is capable of, we should think
| in terms of how accurate the solutions are. Initially these
| models were prone to hallucinations or you need to prompt them in
| specific ways to get what you want.
|
| Today's prompts are yesterday's intermediate languages &
| toolkits. Today's hallucinations are yesterday's compiler bugs.
| Nevermark wrote:
| I think a lot of the problems with language model coding come
| down to three issues.
|
| The first is the models themselves:
|
| 1. A lack of longer context, i.e. whiteboarding or other means
| of breaking down problems into components, being able to focus
| context in and out, etc. This is a direction models are going to
| go.
|
| Just like us, they are going to benefit from organizational
| tools.
|
| The other two are just the need for normal feedback, like we
| developers get:
|
| 2. They need a code, run, evaluate, improve cycle. With a hard
| feedback cycle, today's models do much better.
|
| They quickly respond to iterative manual feedback, which is just
| an inefficient form of direct feedback.
|
| 3. A lack of multiple perspectives. Groups of models competing
| and critiquing each other on each task should improve results for
| the more tricky problems.
|
| --
|
| I personally think it is astounding that models generate somewhat
| reasonable, somewhat buggy code on their first pass.
|
| _Just like I do!_
|
| I don't know any coder that doesn't repeatedly tweak code they
| just wrote due to feedback. The fact that models can output
| first-pass code so quickly without feedback today suggests they
| are going to be very, very good once they can test their own
| code and converge on a solution informed by different attempts by
| different models.
|
| A group of models can also critique each others' code
| organization for simplicity, readability and maintainability.
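|
| A minimal sketch of the code-run-evaluate-improve cycle
| described in point 2 above (an illustration, not from the
| comment itself, assuming a hypothetical generate_code() wrapper
| around whatever LLM is in use and a pre-existing pytest file for
| the task):
|
|     import pathlib
|     import subprocess
|
|     def generate_code(prompt: str) -> str:
|         """Hypothetical stand-in for an LLM call."""
|         raise NotImplementedError
|
|     def run_tests(source: str) -> tuple[bool, str]:
|         # Write the candidate module and run the existing tests.
|         pathlib.Path("candidate.py").write_text(source)
|         proc = subprocess.run(
|             ["python", "-m", "pytest", "test_candidate.py"],
|             capture_output=True, text=True)
|         return proc.returncode == 0, proc.stdout + proc.stderr
|
|     def code_run_evaluate_improve(task: str, rounds: int = 5):
|         source = generate_code(task)
|         for _ in range(rounds):
|             ok, report = run_tests(source)
|             if ok:
|                 return source      # converged on passing code
|             # Feed the failure report back, as a reviewer would.
|             source = generate_code(
|                 task + "\n\nThe previous attempt failed:\n"
|                 + report + "\nReturn a corrected version.")
|         return None                # no convergence in budget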
| kaba0 wrote:
| The day my job as a software engineer is obsoleted by AI is the
| day the singularity happens, and then we will have much, much
| bigger problems. Anything less is just so far from it that it is
| a laughable excuse for a code generator.
| HarHarVeryFunny wrote:
| I think we need to have human-level AGI before "AI developers"
| are a possibility, and it'll probably take us a lot longer to get
| there than most people imagine.
|
| Remember, the job of a developer is not just (or even primarily)
| writing code. It's mostly about design and problem solving -
| coding is just the part after you've nailed down the
| requirements, figured out the architecture and broken the
| implementation down into pieces that can be assigned (or done by
| yourself). Coding itself can be fun, but is a somewhat mindless
| task (esp. for a senior developer who can do this in their sleep)
| once you've got the task specified to this level where coding can
| actually begin. Once the various components are coded, they can
| be built and unit tested (may require custom scaffolding,
| depending on the project), debugged and fixed, and then you can
| integrate them (maybe now interacting with external systems,
| which may be a source of problems) and perform system test, and
| debug and fix those issues. These aren't necessarily all solo
| activities - usually there's communication with other team
| members, maybe people from external systems you're interacting
| with, etc.
|
| So, above process (which could be expanded quite a bit, but this
| gives a flavor) gets you version one of a new product. After this
| there will typically be bugs, and maybe performance issues, found
| by customers which need to be fixed (starting with figuring out
| which component(s) of the system are causing the issue), followed
| by regression tests to make sure you didn't inadvertently break
| anything else.
|
| Later on in the product cycle there are likely to be functional
| change requests and additions, which now need to be considered
| relative to the design you have in place. If you are
| smart/experienced you may have anticipated certain types of
| change or future enhancement when you made the original design,
| and the requested new features/changes will be easy to implement.
| At some point in the product's lifetime there will likely
| eventually be changes/features requested that are really outside
| the scope of flexibility you had designed in, and now you may
| have to refactor the design to accommodate these.
|
| As time goes by, it's likely that some of the libraries you used
| will have new versions released, or the operating system the
| product runs on will be updated, or your development tools
| (compiler, etc) will be updated, and things may break because of
| this, which you will have to investigate and fix. Maybe features
| of the libraries/etc you are using will become deprecated, and
| you will have to rewrite parts of the application to work around
| this.
|
| And, so it goes ...
|
| The point of all this is that coding is a small part, and
| basically the fun/easy part, of being a developer. For AI to
| actually replace a developer it would need to be able to do the
| entire job, not just the easy coding bit, and this involves a
| very diverse set of tasks and skills. I believe this requires
| human-level AGI.
|
| Without AGI the best you can hope for, is for some of the easier
| pieces of the process to be addressed by current LLM/AI tech -
| things such as coding, writing test cases, interpreting compiler
| error messages, maybe summarizing and answering questions about
| the code base, etc. All useful, but basically all just developer
| tools - not a replacement for the developer.
|
| So, yeah, one day we will have human-level AGI, and it should be
| able to do any job from developer to middle manager to CEO, but
| until that day arrives we'll just have smart tools to help us do
| the job.
|
| Personally, even as a developer, I look forward to when there is
| AGI capable of doing the full job, or at least the bulk of it. I
| have all sorts of ideas for side projects that I would like an
| AGI to help me with!
| hintymad wrote:
| > Though coding all day sounds very appealing, _most_ of software
| development time is spent on communicating with other people or
| other admin work instead of just writing code
|
| This sounds very...big corp, which inevitably needs many
| professional box drawers, expert negotiators, smooth
| communicators, miracle alignment workers, etc. But guess
| what: if you are in a core group in a small company, you function
| like a grad student: you tackle hard problems, you spend time
| discussing insights, you derive theories, and you spend most of
| your time writing software, be it requirement gathering,
| designing, code writing, debugging, or documenting. But you
| definitely don't and shouldn't spend _most_ of your time talking
| to other teams.
| Aperocky wrote:
| The big corp eng might not write a lot of code, but their code
| might get _executed_ far more often.
| iteratethis wrote:
| In the late 90s I was in an introductory class on programming in
| C. I kept making memory allocation mistakes that crashed the
| machine.
|
| My mentor: "Don't worry, by the time you graduate, you don't have
| to program, the world will soon be modeling software".
|
| We'd go from 3GLs to 4GLs and beyond.
|
| That never happened. Three decades have passed and we still
| develop at an obscenely low abstraction level. If anything,
| complexity has increased.
|
| At the end of the day though, the point of computing is that the
| machine does what we want. Expressing this via a programming
| language where very expensive humans write text files is not
| necessarily going to last forever.
|
| To me the much more interesting threat is regarding purpose. As
| AI becomes ever more capable, an increasing amount of things
| become pointless.
|
| If, using AI, I have the power of 50 programmers at my fingertips,
| how could one possibly develop anything sustainable and unique?
| Anybody else can trivially make the same thing.
|
| What would set something apart if effort no longer is a major
| factor? Creativity? Easy to just replicate/steal.
| CyberDildonics wrote:
| So even though there has been small incremental progress at
| most in the last 30 years and your mentor's predictions were
| wildly wrong about how much easier anything would become, you
| still think "AI" will give you the power of 50 programmers?
| beryilma wrote:
| Software Engineering as a professional discipline without any
| specialization (and writing CRUD applications or websites doesn't
| count in my opinion) never made sense to me. Software engineers
| producing web applications are modern day equivalent of
| production line workers who were mostly replaced by automation.
| So, AI will likely replace such software engineers.
|
| In contrast, automation and technology did not replace "real"
| engineers. If anything, it made their job more productive.
|
| All this to say that a generic software engineer with no
| specialized skills might be replaced by other professionals
| (engineers, accountants, etc.) who might be able to leverage AI
| to create production quality software.
| windowshopping wrote:
| Gonna go ahead and strong disagree. Dismissing a huge portion
| of engineers as "production line workers" whose work takes no
| thought or creativity is incredibly reductive and makes me
| wonder if you've ever tried doing their job.
|
| Web applications are a category of software with just as much
| variance in their complexity as anything else. Are mobile apps
| and desktop apps also trivial? What's the difference? And if
| none of mobile apps, desktop apps, or web apps require "real
| engineering," then what does? Presumably games are just
| "repetitious production-line-like usage of a game engine." So
| what's real? The Linux kernel? Are the only real engineers
| those writing never-before-seen C code?
|
| This reads to me like a self-serving humblebrag setting
| yourself apart as a _real_ engineer, not like those _other_
| engineers.
| beryilma wrote:
| You are assuming too much about what I think of myself. I am
| a software engineer and have been so for a long time. And
| I've written my fair share of UIs and apps.
|
| If all the things we do are so varied and creative, then we
| shouldn't need to worry about AI replacing our jobs. But that
| is not what I am reading in this post. What software
| engineering jobs will AI replace then?
|
| In fact, what I wrote suggests that specialization is a way
| for software engineering to save itself from the AI threat
| by making the task of the software engineer more
| irreplaceable. No matter what you think about its
| complexities, making web sites is not it.
| windowshopping wrote:
| > If all the things we do are so varied and creative, then
| we shouldn need to worry about AI replacing our jobs.
|
| Your argument is built on this flawed premise. ChatGPT can
| generate poetry and stories. Does that mean writing poetry
| and stories is neither varied nor creative?
|
| The fact that AI _can_ do something doesn't diminish its
| value when it comes from the hands of a human.
|
| Machines in factories can make all sorts of household
| decorative goods too, but there's a reason people still buy
| hand-made things.
| beryilma wrote:
| > The fact that AI can do something doesn't diminish its
| value when it comes from the hands of a human.
|
| Sure. Morally it doesn't. Financially, it certainly does.
| This is essentially in the definition of automation.
|
| Turning software engineers into artisans will not save
| 95% of software engineers.
| phkahler wrote:
| I'm quite unconvinced. If anyone can get an LLM to fix the
| geometry kernel bugs in solvespace I'll quit engineering today.
| From what I've seen, just understanding the intent of the
| algorithms is infinitely beyond an LLM even if explained by a
| human. This is not going to change any time soon.
|
| Pull Requests accepted.
| xpl wrote:
| Why just "software development"? People really should think of
| the endgame. _Everything_ will change.
|
| I mean, what are business, science, [popular] art, and even
| politics, essentially? Just a clever optimization process against
| the real world. Throw ideas/hypotheses at the wall and see what
| sticks.
|
| Computers are very good at optimization! No reason to believe it
| won't all be solved in N years, with like 100x more efficient
| compute and a couple more cool math tricks?
|
| Give future "GPT-10" interfaces for interacting with the real
| world, and it could do everything -- no human
| supervision/"prompting" needed at all. Humans would only be an
| unneeded bottleneck in that optimization process.
|
| There will be _big_ companies consisting only of their founder
| and no one else. And I am not sure there will be a lot of such
| companies (so like "everyone could have one") -- it is more
| likely, due to economies of scale, that there will be only a few
| megacorps owning all the compute.
|
| What we should worry about is how to avoid an _extreme_ wealth
| disparity/centralization that seems imminent in that future...
___________________________________________________________________
(page generated 2024-03-18 23:00 UTC)