[HN Gopher] Build full "product skills" and you'll probably be fine
___________________________________________________________________
Build full "product skills" and you'll probably be fine
Author : nixcraft
Score : 844 points
Date : 2023-03-19 07:42 UTC (15 hours ago)
(HTM) web link (twitter.com)
(TXT) w3m dump (twitter.com)
| DanielBMarkham wrote:
| As a self-taught polymath, I did a lot of research many years ago
| on how good teams create good products. I reached some unusual
| conclusions at the time, but they've stood the test of time.
|
| The main one in regards to John's tweet is this: desired behavior
| has to "force" out code and architecture. Typically teams and
| individual developers carry a lot of presumptions and assumptions
| into their solution space. They probably pick this kind of thing
| up from reading HN, Twitter, and the like. We all do this.
|
| It's these "things you do but don't really think about" that are
| so pernicious in technology development. Guess what? It looks
| like AI is going to master that kind of thing, since with each
| social media post we continue to train it on "how things are
| usually done".
|
| By taking an extreme "force me to write code" approach, you end
| up developing a wide range of skills: DevSecOps, analysis,
| database design, and so on. In fact, you don't really know what
| you're going to develop when you start working on it.
|
| That's the point. That's the job. Always has been.
| lucidlive wrote:
| Calling yourself a polymath is akin to calling yourself a
| genius.
| meesterdude wrote:
| this is a bit exaggerated.
|
| Polymath > A polymath is an individual whose knowledge spans
| a substantial number of subjects, known to draw on complex
| bodies of knowledge to solve specific problems.
|
| If that's who you are, there's no shame in identifying as
| such. Don't let Hacker News comments gatekeep you.
| latency-guy2 wrote:
| There's nothing wrong with being confident in your skills.
| Humility is only seen as a virtue because people have an
| ego and don't like to be reminded that they are unskilled.
| weatherlite wrote:
| Let me guess, not doing great socially?
| DanielBMarkham wrote:
| Perhaps so; however, I did not call myself humble.
|
| Apologies. I overstated. How about "I study a lot of stuff"?
|
| And now the reply, if I've accurately modeled it, is "So
| you've studied a lot of stuff, what makes you an expert on
| any of it?"
|
| Programming at its heart has always been cross-discipline, so
| there's really no judgment of ability aside from solving
| problems involving the disciplines you're working in. I find
| myself with a lack of adjectives that will keep our
| conversation short.
| CuriouslyC wrote:
| For future reference, you can just use "autodidact."
|
| Next time if you want to brag without looking like an ass,
| describe something really impressive you did in detail to
| illustrate a point using highly understated language. Not
| only will you not get penalized for bragging, people will
| give you bonus points for being humble.
| jt2190 wrote:
| Can you clarify? Are the "things you do but don't really think
| about" _good_ , like good habits that have become ingrained, or
| _bad_ because they've become mindless rote that are often
| unnecessary action?
|
| ("Force me to write code" suggests that we're looking to think
| deeply about what we're doing and trying to minimize
| unnecessary action, so the rote actions are _bad_.)
| rozenmd wrote:
| Reminds me of patio11's classic "don't call yourself a
| programmer": https://www.kalzumeus.com/2011/10/28/dont-call-
| yourself-a-pr...
| heap_perms wrote:
| I've read the whole thing. Frankly, it was quite depressing.
| Something about this economically reductionist way of
| thinking puts me off. Even though I agree with a lot of his
| points -- for example, the importance of communication over
| anything else -- it just seems like a very one-sided worldview.
| For example, he constantly outlines aspects 'business value',
| but not one sentence on the inherent, intrinsic joy of creating
| and building something (software in this case). And I think you
| can't just ignore that part.
|
| As someone in this post has put it quite beautifully: > It's
| like telling a musician to become a DJ because the point of
| performing is to entertain people.
| ordu wrote:
| _> Software is just a tool to help accomplish something for
| people -- many programmers never understood that_
|
| Yeah. It seems to me that some live in a kind of platonic
| world, where programming is a tool to produce ideal entities,
| like math is.
| PeterStuer wrote:
| My fear is that people who are not competent enough to judge
| AI-generated content will use it, intentionally or
| unintentionally, as a sort of denial-of-service attack on
| expertise. Middle management churns out a half-baked code
| solution in 5 minutes of copy-pasting from the spec; it then
| takes you hours of investigation to find the pitfalls, and 5
| meetings to get the manager to concede that she did not
| provide 95% of 'the solution', all while dodging a barrage of
| slight adaptations 'that should solve your remarks' and being
| derided for 'negativity'.
|
| I love GPT4, but I hate what it will do in business environments
| BiteCode_dev wrote:
| Robots didn't make car assemblers obsolete. But they did
| reduce the number of workers needed, and raised the
| qualifications needed to work on assembling cars.
| nfRfqX5n wrote:
| Curious about how licensing will play out.
|
| What's stopping OpenAI from claiming copyright on everything
| produced in the last X years?
|
| Will every company need to run their own GPT-4 model to be
| safe from this?
| _-____-_ wrote:
| The early indication from some government agencies is that AI
| output will not be copyrightable.
| osigurdson wrote:
| CS / tech has always been an unstable career choice and I expect
| this will continue. You might make $500K for portions of your
| career or you might end up making $50K or less - hard to say.
| People should not go into the field unless passionate about it.
| nailer wrote:
| Title is misleading.
| Radim wrote:
| Yes, human economy is about the exchange of value between humans
| - a cheap tautology from Carmack. Money is indeed irrelevant to a
| snail, as opposed to a would-be SW engineer.
|
| But Nature is about making better use of energy gradients, always
| doing more with less, the principle of least action. Using any
| surplus to do it _again_ (i.e. evolved life). That's the
| properly grounded perspective. In that sense _"Get skills to
| satisfy humans and you'll probably be fine!"_ sounds super
| myopic.
|
| The anxiety we humans feel when confronted with AI is not only
| that we'll be out of our job as a programmer, or doctor, or
| driver, or teacher, or whatever.
|
| It's the broader sense of unease that humanity's gradient-razing
| days, spectacular as they were all the way to nuclear fission and
| fusion, may soon be over. And "economy" as a useful tool
| advancing that Nature's mission will have evolved beyond us.
|
| "Making humans satisfied" is not terribly relevant from that
| perspective. Vast swathes of the human economy are just
| scaffolding to support the rest: humans reproducing to keep the
| optimization machine going. The overhead is tremendous. Once
| Nature finds a way to do more with less, I have zero doubt much
| of that scaffolding will be optimized away. That's some
| definition of "fine".
|
| Or maybe I misunderstood and Carmack is merely suggesting
| individuals try to adapt and hope for the best. What else can you
| do anyway? That would be the honest answer. Rather than
| bloviating about "Guide AI with your product skills to deliver
| value to humans" - an embarrassing category error.
| z7 wrote:
| Interesting critical response currently buried at the bottom of
| page 2. Doesn't surprise me really. Frankly this thread is an
| embarrassment. And not because of whatever someone's conclusion
| is, but because of a notable lack of critical reasoning in
| this case.
| jimkoen wrote:
| > It's the broader sense of unease that humanity's gradient-
| razing days, [...], may soon be over.
|
| No, the sense of unease comes from people fearing for their
| livelihood, given that it looks like their raison d'etre
| is about to be revoked by some AI automating their career away.
|
| A point you so aptly summarized with: > Vast swathes of the
| human economy are just scaffolding to support the rest: humans
| reproducing to keep the optimization machine going.
|
| The unease is, imo, only so palpable, however, because every
| stakeholder that stands to profit from such a development
| - i.e. capital owners - refuses to describe the development
| as what it really is.
|
| Publicly advertising AI as "we're basicially moonshotting the
| largest job automation operation ever attempted, so 20-30% of
| you will be out of a job in the next 15 years" would probably
| cause unrest - and rightly so!
|
| > What else can you do anyway?
|
| Communism - and ya hate to see it (as an American)!
|
| The means of production becoming dead-simple to use, that's a
| prelude for the proletariat being able to seize them.
|
| On a more serious note, you can always protest, and escalate,
| which I'm hoping will happen.
| Radim wrote:
| > _the sense of unease comes from people fearing for their
| livelihood, given that it looks like their raison d'etre is
| about to be revoked by some AI automating their career away_
|
| You're in the same boat with Carmack if that's where you find
| the danger. Then his advice applies.
|
| People have had their livelihoods threatened since forever -
| that is _not_ a new state of affairs. That anxiety we're
| well equipped to deal with, hard (and genocidal) as it
| sometimes is.
|
| Just another gold rush for resources, and for status, and for
| reproductive success. A new caste of winners. Same old.
|
| > _so 20-30% of you will be out of a job in the next 15
| years_
|
| My point was that jobs (and money, and market...) are a
| machine that serves human ends. The end consumer is always a
| human, that's our economy's anchor point.
|
| Whatever energy is put to work is ultimately in service of
| someone getting laid or fed, or their desires/ideals
| (~firings inside their brains).
|
| As long as that's true, "keep your eyes on delivering value
| to humans" is a safe bet - just like Carmack says.
|
| Mine was a simple thermodynamic note on Nature (e.g. via
| one of its creations, humans) finding a more efficient way to
| bulldoze energy gradients, the true currency of the universe,
| thus unanchoring our "jobs and careers" system. By cutting
| out the very expensive middle _man_.
|
| ---------
|
| Your remark on communism is also interesting in that sense. I
| see a parallel in that the concept of dissolving the
| individual into a commune has been introduced several times
| in large societies, each time resulting in millions of deaths
| and general devastation. Yet it keeps bouncing back - some
| very strong attraction basin lurking there, evolutionarily
| speaking. Not to be trivialized by snarky political one-
| liners.
| jordanpg wrote:
| > Keep your eyes on the delivered value, and don't over focus on
| the specifics of the tools.
|
| Sure, but the aspects of the job that some people enjoy may be
| closely linked to the tools. If software development becomes less
| about coding and more about creating prompts, test suites, or
| specifications, then some may lose interest in the work.
|
| At least for me, it was never really about delivering value. If I
| am honest, I was completely indifferent about some of the
| industries I worked in. It was always just about solving
| interesting technical problems, learning stuff, keeping my brain
| active.
|
| It's easy for me to imagine that software development may someday
| become the province of people who are more like designers.
| bqrayx wrote:
| Programmers can starve AI code generation tools easily by moving
| to a new language and never producing open source, so the AI
| cannot steal and launder their output.
|
| Perhaps this is Microsoft's new anti-OSS strategy, the ultimate
| EEE.
| nico wrote:
| When I was a little kid I asked my dad, an engineer, to teach me
| computer programming.
|
| He refused saying that when I grew up programmers would be
| unnecessary because "anyone would be able to program",
| essentially the interfaces would be so easy/advanced that there
| wouldn't be a need for programmers.
|
| As a kid I never really understood his point. When I finally
| understood, I dismissed it as extreme.
|
| Now I'm realizing my dad was right. Not sure when it's going to
| happen, but it feels like it will be very soon.
| aws_ls wrote:
| Just curious, so do you know how to program or not?
| nico wrote:
| Hahah, of course I went against my dad's advice. Now I'm
| wondering if it was the right move.
| RivieraKid wrote:
| Here's my current thinking on the impact of GPT-4 on the
| developer job market:
|
| - I expect developer productivity to go up 1.5x - 15x over the
| next several years assuming GPT-4 based tooling is integrated
| into IDEs.
|
| - There will be two opposing forces acting on developer wages.
| First, developers will be more productive, therefore the price of
| one hour of work should go up. But - the supply of developer
| output will increase as well, which would push price per "line of
| code" or per "unit of developer output" down. So the big unknown
| is the demand curve.
|
| - There will be a temporary boost in demand for developer work
| connected with the transition to this new technology. Big
| corporations will want to upgrade their systems to automate
| consumer support, startups will make new tools for AI-generated
| graphics, etc.
|
| - We can also study the effects of technology-driven increases in
| worker productivity by looking into history. Developer
| productivity has always been going up - thanks to
| stackoverflow.com, better languages, better IDEs, more and better
| libraries, etc. There's also a greater supply of developers (e.g.
| India). Didn't change the job market too much. One should not
| draw strong conclusions from this though, it's a very superficial
| analysis. On the other hand, people working in agriculture have
| become much more effective, which led to far fewer people
| working in agriculture, maybe because people only need to eat
| as many calories per day as they did 100 years ago.
|
| - My base case, based on the assumption that GPT-4 will not
| improve dramatically, is that developer wages will stay roughly
| constant. But there's a lot of uncertainty in this conclusion and
| in the assumption.
| coffeebeqn wrote:
| > up 1.5x - 15x over
|
| I wish! I still often find that my team members don't know
| basic text/Unix tools. Heck, one guy took our list of enums in
| a JSON file and typed each of them by hand into a class. It
| took me maybe a minute in Sublime Text, while he wasted at
| least a day.
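The transformation described above is exactly the kind of thing a few lines of throwaway Python can handle; a minimal sketch, assuming a hypothetical input shape (a JSON array of enum names) and a hypothetical class name `Status`:

```python
import json

def enums_to_class(names, class_name="Status"):
    """Render a list of enum names as a Python class definition,
    assigning sequential integer values."""
    lines = [f"class {class_name}:"]
    lines += [f"    {name} = {i}" for i, name in enumerate(names)]
    return "\n".join(lines)

# Hypothetical input: a JSON file containing ["PENDING", "ACTIVE", "CLOSED"].
names = json.loads('["PENDING", "ACTIVE", "CLOSED"]')
print(enums_to_class(names))
```

Multi-cursor editing in Sublime Text gets there even faster, but either way it beats a day of hand-typing.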
| austin-cheney wrote:
| I completely disagree because of how this batch of AI learns.
| It learns by studying what is currently available as opposed to
| supplying something new.
|
| Bad developers on the way to being made obsolete by AI will
| see drastic improvements from the thing that's about to
| replace them. Otherwise it's slightly better IntelliSense
| integrated into your editor.
|
| The difference is that bad developers are primarily concerned
| with literacy and their primary intent is chasing _easy_.
| Better developers are comfortable reading code no differently
| than an attorney reading a large contract, and their primary
| intent is chasing better automation. It's the difference
| between designing a car engine versus hammering pieces
| together.
|
| I suspect this will widen wage distinctions. Those who can
| write software will be distinguished from those who can't.
| Those who can will be fewer in number and their wages will
| increase. Those who can't will be a lower-skilled commodity
| with depressed wages, the way data entry was once a skilled
| profession many decades ago but no longer is.
| eggsmediumrare wrote:
| I think a lot of "good" developers making this very argument
| will be in for a nasty surprise when they find out which
| bucket they're actually in
| cableshaft wrote:
| > On the other hand, people working in agriculture have become
| much more effective, which led to far fewer people working in
| agriculture, maybe because people only need to eat as many
| calories per day as they did 100 years ago.
|
| Except population exploded also, and the only reason we could
| support such a population has been thanks to advances in
| agriculture (it could be argued that's not a good thing, as the
| larger population is helping fuck our planet up in other ways,
| but that's for a different thread). So there has been an
| increased demand for food.
|
| However, there's only so much arable land on this planet. The
| planet is finite in size. So when basically the entire planet
| is already being farmed, you can't really add more farmers, so
| further efficiency just leads to fewer farmers.
|
| Software is not so finite. It technically is, as we only have
| so many servers or whatever, but we're nowhere near the upper
| limits of what we can handle, or of the appetite for software
| that companies have (which seems about infinite). Additionally,
| we have a habit of rebuilding and replacing software every few
| years, so even if we reach capacity limits, there's still
| demand for new software to replace the old, or working on new
| features for existing software.
|
| So it's a bit different of a situation and not really
| comparable.
| arwhatever wrote:
| Also, much like that law of bureaucracy stating how work
| expands to fill the time available, every employer I've
| worked for has been able to envision and request vastly more
| software functionality than any dev team has been able to
| produce, which might not extrapolate forever, but has
| sustained the field so far.
|
| On the other hand, people are much more finite in the amount
| of food that they require.
| danaris wrote:
| > assuming GPT-4 based tooling is integrated into IDEs.
|
| And how, exactly, is this going to work?
|
| Is every IDE going to pay some license fee to OpenAI? Will that
| be up to the companies or--even worse--individuals who use the
| IDEs?
|
| What happens when OpenAI decides that GPT-4 is passé and it's
| time to move on to something else? Will the existing IDEs that
| are designed to interact with GPT-4 stop working?
|
| Will GPT-5 or whatever provide the same kinds of assistance, or
| will there be regressions? Will they require a completely
| different license for it?
|
| See, these are some of the important questions that come with
| assuming that a fairly new for-profit company will act as the
| backbone for your marvelous programming renaissance--
| particularly when the service in question is one that takes
| quite a lot of resources to continue to operate.
|
| Personally, I would be _interested_ in some kind of LLM-
| assisted IDE, but no way in hell am I going to make any
| significant portion of my job dependent on an organization that
| decided, as soon as it was clear there was real interest in
| what they were doing, that they'd toss the "Open" part of
| their name and principles out the window and go for a
| closed-source proprietary product and maximum hype for more
| profit.
| habibur wrote:
| > - I expect developer productivity to go up 1.5x - 15x over
| the next several years assuming GPT-4 based tooling is
| integrated into IDEs.
|
| Hardly. 10% of the time is spent writing code and the other
| 90% goes to debugging and fixing things.
|
| Even if AI shrinks that 10% spent on writing by another 90%,
| you still have to take the full time for maintaining that
| code: changing, debugging, testing, deploying, profiling, log
| analysis.
|
| Worse, it's now someone else's code you need to understand and
| debug, not code written by yourself.
| [deleted]
| RivieraKid wrote:
| It's not just writing code, GPT and GPT-based tools can help
| with fixing bugs, solving configuration problems,
| understanding existing code, suggesting variable names, etc.
| kristofferR wrote:
| ChatGPT is better at that 90% than the 10%.
| mdorazio wrote:
| I think you have two common misconceptions here.
|
| 1) Wages have very little to do with value/productivity in a
| free market. They are almost entirely determined by supply and
| demand. Value simply places a ceiling on compensation. Thus, if
| far more people can perform a "programming" job because of
| GPT-X, unless the demand for those jobs rises significantly the
| net result will be wage reduction.
|
| 2) There's this weird thinking on HN that since a developer's
| job involves [insert your % estimate of time spent actually
| coding/bug fixing] and the rest is figuring out requirements,
| dealing with weird requests, planning, etc. that means
| developers can't be replaced. However, I don't see a whole lot
| of discussion around what the difference is between a developer
| and a competent business analyst in a GPT-X world. The latter
| can be had for significantly less money, requires less
| training, and if the actual programming part is largely
| automated away or concentrated in the hands of fewer "real"
| developers, those roles start to look awfully similar.
| RivieraKid wrote:
| 1) Economic theory says that marginal product of labor
| (value) is equal to wages, at least in simple models.
|
| 2) With GPT-4 you still need to know how to program. A
| product manager can't replace you.
| mdorazio wrote:
| 1) The real world is not a simple economic model. The wage
| rate is roughly equivalent to what it costs to replace
| an employee, not their marginal value. If your argument were
| true, company profits would tend toward zero as wages rise.
|
| 2) I specifically did not say GPT-4. If you think v4 is the
| peak of what will be possible, given how far we have come in
| just _2 years_, then I don't know what to tell you. Also, a
| product manager is not a business analyst.
| RivieraKid wrote:
| 1) I think it's approximately true in the real world. If
| you can hire an employee who adds substantially more
| revenue than their wage, you keep hiring
| employees until the value created by an employee is close to
| their wage.
|
| When discussing the economy, it's good to start by
| understanding the situation through the lens of economic
| models and then consider to what extent the conclusions
| apply to the real world.
|
| Using your argumentation - above you said that xyz is
| true in a free market, but real-world markets are not
| free...
|
| 2) And I specifically said that I'm assuming no dramatic
| improvement beyond GPT-4. The 2nd misconception I
| supposedly have... I didn't even make that claim.
| silvestrov wrote:
| > competent business analyst
|
| At the risk of invoking the "No True Scotsman" fallacy,
| I'd say that competent business analysts are even more
| difficult to get hold of than competent programmers.
|
| I've had so few managers that were competent at managing
| people and projects.
| robjan wrote:
| Business Analysts aren't usually (project) managers
| CuriouslyC wrote:
| The developer can prompt for a solution with specific
| storage/performance requirements by specifying an algorithm,
| and specific scalability requirements by specifying an
| architecture. Imagine a business analyst prompting for an
| app and getting a Ruby on Rails monolith with O(N^2)
| performance in the core behavior of a service that is
| expected to handle millions of requests daily.
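The complexity gap described above can be made concrete with a toy example; this sketch (hypothetical, not from the thread) contrasts a quadratic duplicate check with a linear one, the kind of difference that sinks a service at millions of requests per day:

```python
def has_duplicates_quadratic(ids):
    """Compare every pair of IDs: O(N^2) time."""
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            if ids[i] == ids[j]:
                return True
    return False

def has_duplicates_linear(ids):
    """Track seen IDs in a hash set: O(N) time."""
    seen = set()
    for x in ids:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Both return the same answers, but at N in the millions the quadratic version is the difference between milliseconds and hours.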
| 88913527 wrote:
| The follow-on prompts would be to refactor the existing
| system to solve the scalability issues. You'd need to be
| able to feed in the existing codebase for that, though.
| _fat_santa wrote:
| > Imagine a business analyst prompting for an app and
| getting a Ruby on Rails monolith with O(N^2) performance
| in the core behavior of a service that is expected to
| handle millions of requests daily.
|
| I see this as the main argument against "we will just have
| tools that allow managers and BAs to do what devs do now".
| I think folks often forget that there are two sets of
| requirements for every app: business requirements and
| technical requirements. Non technical folks might
| understand the business requirements very well and may even
| be able to write code that satisfies those requirements,
| but the real value in a dev is squaring those business
| requirements with technical ones. A BA might look at a
| DynamoDB table and say "yeah lets just run a scan for the
| items we need", whereas a dev will look at the same problem
| and say "yeah we can do that but it will cause issue A,
| issue B and sometimes issue C". And the dev knowing those
| gotchas is why you have them there in the first place, a
| dedicated person that knows all these gotchas and makes
| sure you organization avoids a footgun in prod.
| amelius wrote:
| It's not that more people can do programming with GPT-X
| around, because the AI will only solve the problems that have
| already been solved thousands of times in the past in slightly
| different ways. What GPT-X cannot do is left to real CS
| people. So instead of coding CRUD systems, we can do real
| algorithms research again, except fewer people are capable of
| it.
| roflyear wrote:
| Increased supply generally doesn't decrease cost, fyi.
| phendrenad2 wrote:
| Moore's Law is dead and AI is its zombie. Best to just ignore it
| and spend your time making things.
| jongjong wrote:
| If anyone wants to avoid wasting their software development
| career: DO NOT EVER work on developer tools as the product.
| Developer tools are one of those areas where it doesn't matter
| how good your product is; no matter how much developers say
| they like it or how much time it saves them, it's not going to
| make it. Big tech companies will not allow their employees to
| use the tool and it will be a commercial failure. It will be a
| failure no matter what... Ok, unless maybe you can raise a ton
| of funding from well-known VCs who will foist your tool onto
| various companies they have connections with... But then the
| quality of the tool doesn't really matter at all.
|
| Otherwise, even if it's the best tool ever built for certain use
| cases, company directors won't have the deep tech knowledge to
| understand the nuances which make it so useful. As for
| rank-and-file developers who are meant to use the tool, they
| are more interested in over-engineered, very complex tools
| which maximize billable hours than in tools which make them
| more efficient at their job.
|
| In other words, the only people who could possibly want your
| product won't understand your pitch and those who can understand
| your pitch won't like it because it doesn't align with current
| perverse industry incentives for their roles.
|
| Some developers consciously reject any tool which would make
| their jobs easier, others reject them due to a subconscious bias
| in favor of tools which facilitate complexity, disagreements and
| long meetings.
| [deleted]
| ksec wrote:
| >Software is just a tool to help accomplish something for people
| - many programmers never understood that. Keep your eyes on the
| delivered value, and don't over focus on the specifics of the
| tools. - John Carmack
|
| The same as it was in the 80s or 90s; some 30 years later, the
| tech industry hasn't changed. It may have advanced
| technologically, but in many cases I think the UX, tools and
| product decisions have actually regressed.
|
| The divide between a product genius and actual programmers has
| never been greater. At least Steve Jobs used to understand this
| better than anyone else.
|
| >And, one of the things I've always found is that you've got to
| start with the customer experience and work backwards to the
| technology. You can't start with the technology and try to figure
| out where you're going to try to sell it. And I made this mistake
| probably more than anybody else in this room. - Steve Jobs.
| fsloth wrote:
| I can't wait to outsource most of the gruntwork I need to do to
| ChatGPT. Last week I had it write me a poisson disk sampler over
| the surface of a triangle mesh with C# - and it was 100% correct.
| Ofc not perfect in the details, but a perfect sample solution
| and scaffolding for the final code.
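The C# mesh sampler itself isn't shown; as a rough illustration of the technique, here is a minimal 2D dart-throwing Poisson disk sampler in Python (a simplified analogue, not the commenter's code; production use would favor Bridson's grid-accelerated algorithm, and a mesh version would sample per-triangle in barycentric coordinates):

```python
import random

def poisson_disk_2d(width, height, r, k=2000, seed=0):
    """Naive dart-throwing Poisson disk sampling in a rectangle:
    accept a random candidate only if it keeps distance >= r
    from every previously accepted point."""
    rng = random.Random(seed)
    points = []
    for _ in range(k):
        cand = (rng.uniform(0, width), rng.uniform(0, height))
        if all((cand[0] - p[0]) ** 2 + (cand[1] - p[1]) ** 2 >= r * r
               for p in points):
            points.append(cand)
    return points
```

This is O(candidates × accepted points); Bridson's algorithm uses a background grid to make each distance check O(1).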
| linsomniac wrote:
| 30 years ago I got tired of writing the same code over and over,
| so I went looking for something to cut out the repetition. Around
| a decade later I got tired of rewriting libraries or programs to
| make them exactly fit my needs. I used to really enjoy just
| coding for the sake of coding. But I started to value my time
| much more. I called this phase "losing my immortality".
|
| Any code that ChatGPT can write, I don't want to write. I'm ok
| with it taking my job. If I can work in higher level constructs
| and get more done, I'm all over that.
|
| Last week a coworker needed some Python code to figure out how far
| through a day it currently is. I started thinking of the
| different ways to approach it (strftime, maybe time_t modulo
| 86400, twiddling datetime objects). Before I got very far I
| decided to ask chatGPT to write it, eyeballed the response
| (twiddling datetime objects), ran it and verified the response,
| and called it good. I should have asked it to write some tests
| for that code while I was at it.
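For reference, the "twiddling datetime objects" approach described above can be sketched in a few lines, assuming naive local datetimes and a fixed 86,400-second day (DST transitions would need time_t arithmetic or timezone-aware handling):

```python
from datetime import datetime

def fraction_of_day(now=None):
    """Return how far through the day it currently is,
    as a float in [0, 1)."""
    now = now or datetime.now()
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    return (now - midnight).total_seconds() / 86400.0
```

For example, `fraction_of_day(datetime(2023, 3, 19, 18, 0))` returns 0.75, and the tests the commenter wished for are a couple of asserts on fixed datetimes.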
|
| I'm now trying to teach my son, an avid programmer, how to work
| with chatGPT. He's 13, so he's got a lot of opportunity to really
| integrate it into his career, but he also has a lot of blind
| spots where chatGPT can really lead him down some blind alleys
| and knife him.
| carapace wrote:
| This. It's pretty obvious that any software that _can_ be
| written by machine _should_ be. (It's almost tautological,
| eh?) Otherwise you're doing what David Graeber famously called
| "Bullshit Jobs": pointless work that only serves to establish
| your position in the economic structure.
|
| The immediate issues are: who decides the boundaries of
| (publicly) acceptable thought -and- who profits?
|
| I think you touch on the deeper and even more challenging
| issue:
|
| > I'm now trying to teach my son, an avid programmer, how to
| work with chatGPT. He's 13, so he's got a lot of opportunity to
| really integrate it into his career, but he also has a lot of
| blind spots where chatGPT can really lead him down some blind
| alleys and knife him.
|
| When the talking computers can answer any (allowed) question,
| the educational challenge becomes building human minds that can
| recognize and select the right options from the unbounded
| "white hole" of content that the machines can produce.
|
| Now, the word "right" in the previous sentence is doing a
| hellofa lot of work. Deciding the _right_ course of action is
| an open-ended intelligence test. It's evolution, eh?
| lumb63 wrote:
| "Can answer" is doing a lot of work, too. I can write you a
| program that "can answer" any question you throw at it. I can
| assure you the answer won't be correct in most cases. That's
| hyperbole as applied to ChatGPT, but my point is that designing
| a system that can answer correctly, rather than one that can
| merely answer, is a far more worthwhile effort than retooling
| humans to be lie detectors, IMO.
|
| We can see how hard it is to find consensus on trivial
| topics (are eggs good for you?) amongst experts who study
| these matters day after day. And the public, who does not, is
| left confused. Many of the problems we face are too
| complex for the human mind to be able to decide which of
| several reasonable-sounding options is correct. This is
| especially true when there is no rationalization for answers
| given.
|
| Look at our media system. How many people on both sides are
| stuck in "other side bad" simply because the talking heads on
| television networks tell them that, or someone on the
| internet wrote something. A lot of the content that drives
| them to conclusions isn't even true. We are very poorly
| equipped to be vetting answers from a fallible machine.
|
| Maybe I'm too old school, but I'd rather we learn how to
| solve the more complex problems so that we can do it and
| understand it rather than take guesses at which of a number
| of generated options is correct.
| carapace wrote:
| > designing a system that can answer correctly
|
| I think the solution is obvious: connect the computers to
| empirical feedback devices and make them scientists. Humans
| hallucinate too if they spend too much time in a sensory
| deprivation tank. Give the computers sensory inputs and
| they will naturally become scientists.
|
| > find consensus on trivial topics (are eggs good for you?)
| amongst experts who study these matters day after day
|
| Leaving aside the question of how reliable the current soft
| sciences really are, this is exactly the problem that these
| machines can help with, once they have the data. Consider
| the modified form of the question: Are eggs good for _me_?
| At this stage in my life? At this time of day? Given what I
| ate for dinner last night? And millions of intricate
| details about my medical history, and my family, and DNA,
| and "exposome", etc. (I worked very briefly for a medical
| sensing startup, our chief Doctor would wax lyrical about
| the possibilities for personalized medicine and nutrition--
| once we have the data.)
|
| > the public, who does not [study these matters], is left
| confused
|
| To some extent, being able to do things without
| understanding how they are carried out "under the hood" is
| a measure of the advancement of civilization, eh? (But
| please don't mistake me as arguing in favor of ignorance!)
|
| > when there is no rationalization for answers given.
|
| We can ask the computers to show their reasoning (soon, if
| not now), we can ask them to summarize the current state of
| knowledge, including the grey areas and disputes, eh?
|
| > How many people ... are stuck ...
|
| I think it's clear that these machines will rapidly become
| perfect salesmen and perfect therapists. It's less clear to
| me what we will do with that.
|
| > We are very poorly equipped to be vetting answers from a
| fallible machine.
|
| I don't think the machines will be fallible once we connect
| them to sensory systems, but I do think that lots of people
| will try to do silly "Dr. Evil" things like try to mass-
| program their subjects/citizens. And I think lots of people
| will let them raise their children, which will probably have
| mixed results.
|
| > I'd rather we learn how to solve the more complex
| problems so that we can do it and understand it rather than
| take guesses at which of a number of generated options is
| correct.
|
| Selecting from the options the computer generates is the
| only complex problem left, see "Augmenting Human Intellect:
| A Conceptual Framework" SRI Summary Report AFOSR-3223 by
| Douglas C. Engelbart, October 1962
| https://dougengelbart.org/pubs/augment-3906.html
| tailspin2019 wrote:
| > It's pretty obvious that any software that can be written
| by machine should be.
|
| This is such a good, succinct, and I think objective,
| statement. It strikes me as a great lens through which to
| look at recent developments.
|
| It does not imply that "all software should be written by a
| machine". But simply that anything that a machine can do just
| as well as a human, should probably be done by a machine
| rather than a human. And all additional _value_ on top of
| that machine-produced material will continue to come from
| things that only humans can uniquely do well. And of course,
| there will continue to be many things in that latter
| category.
|
| For many of us in this field, this means less busywork and
| more time spent working at higher levels of abstraction. More
| time curating, crafting, integrating, strategising. Less time
| working on the nuts and bolts.
|
| And for those who _love_ to hand-craft the nuts and bolts...
| I think that opportunity will still be there too. Because
| handcrafting will become a rarer skill, and there will still
| be areas where it is the correct approach.
|
| Those of us who used to handcraft nuts and bolts but now
| delegate this to the machines, will still benefit from our
| understanding of how the nuts and bolts used to be made.
|
| And those who grow up having never made a nut or bolt -
| perhaps not even knowing what nuts and bolts even are, will
| benefit from starting their path much earlier on in life
| working at a higher level of abstraction.
| gavinray wrote:
| You might also consider buying your son a Github Copilot
| license, it's $10/mo and I would easily pay $50/mo for the
| value it provides.
| jghn wrote:
| How does it compare to ChatGPT in terms of code quality?
| flir wrote:
| I've got both running, and I find GPT4 more useful. Unless
| I'm using it wrong, Copilot is "better autocomplete" and
| saves some typing. GPT4 can help you reason about code, and
| has helpful suggestions about debugging. I'm probably going
| to bin Copilot.
| bm-rf wrote:
| Github Copilot uses a different model than ChatGPT, with a
| tokenizer better suited to code than to plain English.
| rabuse wrote:
| It's a substantial improvement when actually coding on the
| fly, since copilot will also analyze code in your codebase.
| It just makes coding the BS parts less tedious.
| lordnacho wrote:
| I think you get copilot because you don't end up having to
| cut and paste stuff into a browser.
|
| Have to say I'm very happy with copilot, it's definitely
| worth the 10 bucks a month.
| weatherlite wrote:
| How many people doing this job do we need compared to the tens
| of millions of people building programs by hand we have today?
| williamcotton wrote:
| How much better would software be if a team of five
| individuals could produce the same output as a team of 500
| individuals?
|
| How much terrible software is out there? How much terrible
| FOSS software is out there?
|
| How much amazing and humanistic software could be created if
| the costs for production were drastically reduced and the
| demands of the market were less of a factor?
| AnimalMuppet wrote:
| If we had a specific quantity of software that was needed,
| that might be a good argument. But to me, it looks like the
| quantity of software we want is unlimited. (Or at least, the
| quantity of stuff that we want software to _do_.) To the
| degree that GPT enables the same software to be written with
| fewer programmers, to that degree we'll write more software,
| not have fewer programmers.
| MartijnHols wrote:
| How many companies will want new or bigger apps that they can
| finally afford if it requires fewer developers?
| weatherlite wrote:
| It's possible everything will speed up, and since the
| competitors are also speeding up, some kind of arms race on
| steroids will take place where not only are we all much
| more productive but we are also not cutting back on
| workers. I find it hard to believe but I concur it is
| possible.
| marcosdumay wrote:
| If you had an AI capable of writing the code you mean to
| without errors, the demand for software developers would be
| even larger than what it is today.
|
| Deciding what problem to solve and how to solve it is hard
| enough to sustain more than the current population of
| developers. (But if you had an AI capable of deciding those,
| then yeah, that would be a problem.)
|
| Anyway, the current crop of highly hyped AIs are obviously
| structurally unable to do either. And it's not clear at all
| how far any AI is from solving any of those (which usually
| means we are many decades away).
| adnmcq999 wrote:
| well that's weirdly graphic
| throwaway4aday wrote:
| A long tradition, see footgun.
| linsomniac wrote:
| You are referring to "lead him down a blind alley and knife
| him".
|
| ChatGPT has answered some specific code questions with code
| that is not only wrong, but horribly wrong. Things like
| presenting anti-patterns as the answer, hallucinating APIs
| that don't exist and suggesting code using them, or code that
| is subtly wrong. These sorts of things go well beyond leading
| you down a dark alley, hence my addition.
| smallest-number wrote:
| I've always thought computer science was the closest thing the
| real world had to magic, because the essence of software is
| always automation - you write the spell, so later you just have
| to invoke it and magic happens.
|
| Whether the actual spell is written in arcane runes or python or
| encoded as a language model doesn't matter, the essence is the
| same.
| 99miles wrote:
| So many people focus on their "stack", and all these things that
| have little effect on the outcome. Customers don't know or care
| how something is built, they just want it to provide value and
| solve a problem.
| iamacyborg wrote:
| This is a general truism. Focus on the why, not the what or the
| how
| worrycue wrote:
| I feel when we have truly intelligent machines, programming jobs
| will be gone. But AIs like ChatGPT aren't there yet. It's just
| good at faking it - until it isn't and fails silently.
|
| Maybe it's the lack of data - it's difficult to model the world
| accurately with just words. Maybe it's an architectural
| limitation that no amount of data can fix and we need new better
| algorithms.
|
| Either way, given the state of its current output I don't think
| it's there yet.
|
| Should AI actually reach such a level ... I think everyone will
| be out of a job. Accountants, engineers, lawyers, even doctors
| will take a haircut. Programmers will just be a drop in the ocean
| of the jobless.
| fnord77 wrote:
| there was a point in time when being an average musician was a
| viable middle class career
|
| technology decimated that
| MarkusWandel wrote:
| The computers are coming for jobs ever up the white collar scale.
| When I started working there were (barely) still secretaries who
| typed and filed things for their bosses, and quite a few geeks
| had jobs that involved assembly language programming.
|
| AI will take jobs. Super frustratingly, it'll probably make call
| centers even more useless (has anyone ever gotten anything useful
| out of one of those "Hi, I'm ____. Can I help you?" popups at the
| bottom right of web sites?). And it'll certainly automate some
| of the "copy/paste" type programming jobs at the lower end of the
| scale, the same way email automated a lot of secretarial jobs;
| i.e. 10 fulltime copy/paste programming type jobs may become a
| job for one human and an AI assistant.
|
| Which leaves people who are really passionate about and good at
| their craft. Somewhat relatedly, I've seen an uptick of people
| going into the trades. AI won't take plumber or electrician jobs
| away in the foreseeable future.
| robinsonb5 wrote:
| I think we've already seen the dividing line drawn between
| people who can ultimately be replaced by a computer (or at
| least, those whom top-tier management believe can be replaced
| by a computer) and people who won't: Work from Home.
| acatnamedjoe wrote:
| I see the plumber/electrician thing a lot.
|
| But talking to my friends who do these jobs it always seems
| like it would be even more vulnerable to AI than programming.
|
| Experienced electricians get paid decent wages because they
| have had lots of training and then have seen loads of different
| problems. So they intuitively know things like 'This is a 1960s
| house so if there's a problem with the lighting the first thing
| I should check is the fuse box connector, it should look like
| xyz, etc. Etc.'. This seems like exactly the sort of thing an
| LLM could do for them.
|
| I think you could easily see a world where an electrician is
| someone on minimum wage with very minimal training who blindly
| connects wires as instructed by an AI.
|
| I reckon the safest jobs are ones with limited reliance on
| knowledge and a very high level of physical skill (in
| environments where it's hard to get machines to operate).
| Bricklayers, plasterers and painter/decorators will be the big
| earners of the 2050s!
| abraae wrote:
| The most recent electrician jobs we've had done were:
|
| - fitting a timer into the switchboard to control the hot
| water cylinder. A simple job, but the sparky also had to talk
| to me (the client) to get us both on the same page.
|
| - fitting an EV fast charger in the garage. Not much science,
| but a lot of cable running and clipping down, then the garage
| switchboard needed to be swapped out for a larger one that
| could take the required RCD. And convincing me which brand
| charger to go for. 2 guys working together for a couple of
| hours.
|
| - fixing the range hood light (always on due to a broken
| switch). He spent quite some time trying to extract the
| broken switch, with the range hood balanced on his shoulder
| and wires everywhere.
|
| In every case there was no real complexity to the job, not
| the sort of thing that an AI could have been helpful with at
| all. Just a lot of common sense, knowledge of the regulations
| and much skilled manual work.
|
| I don't think AI is coming for electricians any time soon.
| acatnamedjoe wrote:
| But in all of those cases presumably someone needed to
| figure out what needed doing? (In your case maybe you're
| savvy enough that you knew what the issue was and just
| needed a certified person to do the work, but most clients
| won't be).
|
| My argument is that it is the 'figuring out' that drives
| electricians wages, not really the doing part. Because
| while clipping down cables and extracting switches is
| fiddly work, I'd argue it isn't a skill with enough barrier
| to entry to maintain high wages (as compared to brick
| laying or plastering, for example, which you simply can't
| do to a professional level without years of practice).
|
| So most of the value delivered by an experienced
| electrician is in talking to clients and identifying the
| correct technical solution, and is therefore pretty much
| analogous to the value delivered by software developers.
|
| Therefore if we accept the logic that software developers
| will no longer be required (or that their value will be
| greatly diminished) it's hard to see how that wouldn't
| apply to electricians too (in the sense of being a well-
| paid trade over and above your average manual job).
|
| (Btw - I DON'T think either will happen, but I just think
| electrician is a weird choice of example for those that do
| think that)
| orsenthil wrote:
| > I reckon the safest jobs are ones
|
| The safest jobs are the ones where the doer is honest with
| themselves. They will be able to keep creating value, whether
| using other humans or machines, and continue to do so.
|
| (Sorry, I wanted to try "the safest jobs are the ones that
| involve politics"; while those will always be present, they
| are not the safest and there won't be many available, so I
| changed to a more abstract answer.)
| pbalau wrote:
| You know what a tradesman does today and an AI cannot? Get
| under the sink, undo that rusted bolt, route that cable in
| that awkward position and so on.
| drclau wrote:
| Too many people make the mistake of thinking there's an
| infinite number of sinks to get under.
| ChuckNorris89 wrote:
| There is though. In Europe, finding a plumber who will
| take you on can mean waiting weeks, precisely because
| the sinks already in existence keep breaking down.
| ilaksh wrote:
| Advanced robots will be able to do that in less than five
| years. Inexpensive ones in less than 20 years.
| revelio wrote:
| I wonder if there's enough info about how to do tradesmen's
| jobs online for that to happen. Programmers are at risk
| because we filled the internet with free training materials
| but many jobs aren't like that especially anything with a
| physical component.
| acjacobson wrote:
| I don't know if it is enough yet but there has been an
| explosion of this kind of content on Youtube over the last
| 10 years. For typical home repairs it seems most topics are
| pretty well covered.
| Al-Khwarizmi wrote:
| This is an interesting point. A family member of mine is
| what we call here a medical evaluator - not sure if it has
| a direct equivalent in e.g. the US or what it would be
| called there - but those are doctors who assess the
| disabilities of
| workers who apply for a pension due to illness or accident.
| This involves examining the patient and then making the
| decision and justifying it in a report. The latter two seem
| like tasks that LLMs should be able to do easily.
|
| However, we tried a description of a fake case to see what
| Bing could do, and it couldn't do much. And I think the
| reason is that there are very detailed documents on the
| rules that they follow for their decisions, but these are
| not online - they are in a private intranet and they can't
| take them out of there. If Bing had access to those
| documents I don't think it would have much of a problem.
|
| So maybe a way for workers to protect themselves from being
| replaced by AI is not uploading much information about
| their jobs to the Internet... I wonder if this will lead to
| a situation like guilds in the middle ages, treating job
| skills essentially as secrets.
| ilaksh wrote:
| There is no reason to expect robotic technology to halt. Look
| at what things like Tesla or Boston Dynamics robots can do.
| Eventually we will see very well articulated and high
| strength to weight ratio robots integrated with advanced AI
| systems. It is definitely not going to take 25 years.
|
| If you look at what's happening today, in 25 years it seems
| plausible that fully autonomous superintelligent androids
| with much more dexterity than humans will be fully in control
| of the planet.
| WithinReason wrote:
| AI can only automate things for which a training database
| exists
| drclau wrote:
| > AI won't take plumber or electrician jobs away in the
| foreseeable future.
|
| I see this argument way too often. How many electricians and
| plumbers does the world really need? And if the market becomes
| over saturated, how much will those jobs pay? How often will
| you actually have a job to do?
| chii wrote:
| > And if the market becomes over saturated, how much will
| those jobs pay?
|
| AI makes everything else so cheap, that by just working for a
| small bit of time, you can afford all the necessities?
| dmpk2k wrote:
| That's been the promise of increasing productivity for many
| decades, and yet...
| chii wrote:
| if you were to live with only what was around at the time
| such productivity increases occurred - e.g. no internet, no
| phones, no high-speed computing, no access to
| medical/transport improvements, etc. - you'd probably be
| able to live off minimum wage easily.
|
| People work more today because they desire more. In fact,
| their desires have outstripped their ability to increase
| their wages!
| [deleted]
| jxi wrote:
| [dead]
| Veuxdo wrote:
| Hint: if you've ever described what you're working on using the
| phrase "... in Python" or "... in Rust", this probably applies to
| you.
| thequadehunter wrote:
| I hear this kind of stuff all the time working in IT. A
| surprising number of people think they gotta "learn docker" and
| "learn Powershell" and "learn AWS" and it just doesn't make any
| sense to me. Just learn the basics well and apply it to whatever
| you're doing when necessary. AWS will come out with their weird
| lingo for stuff that already exists and all you need to do is map
| it to the concept you already know.
| Mizoguchi wrote:
| Software Engineering is just 10% writing code.
|
| The other 90% is understanding requirement specifications
| (sometimes even helping customers write them), producing
| detailed functional specifications, cost analysis, prototyping,
| meeting with third-party vendors over interface design
| specifications, determining the project's scope, testing,
| delivery, integration and commissioning, bug fixing, and
| identifying and managing scope changes, among other things.
|
| AI may help you complete some of these tasks more effectively,
| but at the end of the day it will be just another tool in your
| kit.
| suprfnk wrote:
| > Software Engineering is just 10% writing code.
|
| That really depends on the type of "Software Engineering"
| you're doing. In my experience, in greenfield projects, boring
| CRUD-type programming can easily take up 50% of your time.
|
| It'd be great if AI could automate this boring CRUD-type
| programming away, and let me focus on the architecture and
| interesting algorithms.
| lakeshastina wrote:
| Programming jobs will not disappear, but it will not be similar
| to what programmers of today do on a daily basis. So, I think the
| significant shift needs to happen in the way we are educating
| kids about CS fundamentals, Math and Science.
|
| As AI systems become able to generate much of the code by default,
| the expectations of the customers will similarly increase. Just
| remember how much an IDE like Eclipse or IntelliJ changed the
| productivity of programmers 20 years ago. Similarly, how easy
| apps were to build when Rails would create a scaffold with a
| simple command. It only allowed us to build more complex customer
| experiences in the end. This will continue.
|
| Second, there is the need to verify the output from such systems,
| and also tie them together with other modules. In large
| enterprises, they would also need to be integrated into existing
| codebases, often legacy infrastructure.
|
| Then comes the implementation of tons of Govt regulations in
| finance, lending, taxes, medicine, and so on as code. Software
| has not yet penetrated these verticals as well as they can. In a
| recent podcast, Chamath Palihapitiya mentioned that now it is
| possible for the Dodd-Frank regulations to be in code, versus as
| a written document. It's a good example.
|
| Lastly, there are THOUSANDS of companies with legacy software
| systems that will still need to be maintained, and transitioned
| to modern technology stacks and infrastructures. This space will
| continue to be a major source of employment for programmers for
| the next few decades.
| karmasimida wrote:
| So much this.
|
| We, as software engineers, build software to deliver value, to
| accomplish certain goals.
|
| It doesn't reside in typing the code out.
|
| To play devil's advocate: that part of the job is boring.
|
| AI tools will come in and take over whatever they could take over
| from this moment forward.
| seydor wrote:
| I can definitely see writing games with natural language in the
| near future. Not everyone can do that of course, but they don't
| need to be programmers either, just people who are into the
| thing.
| yread wrote:
| I work a lot with doctors, where there are also worries about
| being replaced by AI. The pioneers there say that won't happen, but
| doctors who use AI will replace the ones that don't. Same thing
| in our field. And just like in our field there will always be
| niches where AI output isn't good enough and there isn't enough
| money to improve it so human specialists will own it.
| RivieraKid wrote:
| Doctors? They're safe. They interact with patients, use
| physical tooling, demand for their work is growing because of
| demographics. And you really don't want a doctor who sometimes
| hallucinates advice.
| yread wrote:
| Sorry, should have been more specific. I work with
| pathologists. They never ever talk to anyone much less a
| patient :-)
| dw_arthur wrote:
| Doctors won't be replaced by medGPT. They will be replaced by
| a nurse practitioner using medGPT. The financial incentives
| to replace highly paid doctors with nurse practitioners are
| just too high.
| impalallama wrote:
| Jesus this title lmao. I thought this was a statement from
| Carmack when the actual tweet expresses the exact opposite.
| bilekas wrote:
| I don't know why the title was changed. It made much more sense
| earlier.
| dumbfounder wrote:
| I think of AI as simply a productivity tool, and it is here to
| make everyone more productive, like Google did 25ish years ago.
| Google may have put some out of a job, but it made everyone much
| more efficient. This is a good thing. Work weeks are shortening
| around the world and this will help us maintain productivity as
| we work less. (Fingers crossed)
|
| https://ourworldindata.org/working-hours
| tikkun wrote:
| Very misleading title. Implies that Carmack said that, which
| isn't true.
| MrGilbert wrote:
| Agree - the title before wasn't really great to read, but at
| least it got the point right.
| lyleVanf wrote:
| I think something that a lot of people might be overlooking is
| just how much this might devalue software as individual products.
| How many technologies do we have now that might become irrelevant
| once LLMs become more mainstream? How can any company keep a moat
| around their product if anyone can simply generate that same
| function (or similar) with a few prompts? The only reason any
| software is particularly valuable is because of the difficulty
| that comes with making it.
|
| An example that comes to mind is Jira: why have verbose task
| management software when bespoke task management systems become
| even more viable for individual companies? Or better yet, given
| the need for individual cogs decreasing, why have that at all?
|
| This also extends to the creation of any sort of new business,
| perhaps there are patents on specific products and brands (which
| might be the saving grace of many large orgs) but outside of
| niche applications and hardware access I can't see how someone
| can reasonably gain a leg up in such an environment.
|
| edit: This is more speculative, but what if software actually
| becomes more of a process of molding a large language model to
| consistently behave in a certain way? Why have some code that
| manages your back-end functionality when, for a large share of
| applications, all that is really occurring is text manipulation
| with some standardized rules. If those rules can be quantified,
| and consistency can be expected, the only "coding" that needs to
| be done is prompting the model.
| kashnote wrote:
| Lots of people saying that a programmer's job is more than just
| writing code, and I agree. But consider this:
|
| You give ChatGPT-58 some startup idea and ask it to incorporate
| the company, build the software, do the marketing, etc. It starts
| doing a pretty good job. It's in charge of the whole system, so
| it doesn't need human intervention to give it context. The
| company grows and is making $1M/yr. It has now replaced 10
| potential jobs in the market.
|
| I feel like that's the worry many folks have. It's a pretty
| dystopian view of the future but if you can make $1M/yr and not
| have to pay any employees that money and all you had to do was
| pay OpenAI about $100/mo, would you not do that?
| thomastjeffery wrote:
| Real Artificial Intelligence? Yeah, that would definitely factor
| out a lot of the wasted work we call "engineering".
|
| Large language models like GPT? Not even close.
|
| We should absolutely stop calling those "AI". They are not
| intelligent. They _model_ intelligent behavior: human writing.
|
| We should probably even stop calling them "language" models.
| They don't know or care what language is: they learn whatever
| patterns are present in text, language or not.
|
| Text contains the patterns that humans identify as language; but
| those aren't the _only_ patterns present in text: which means
| language is not the only thing being modeled by an LLM. There is
| no categorization happening either: a pattern is a pattern.
|
| There is this religious cult surrounding LLMs that bases all of
| its expectations of what an LLM can become on a _personification_
| of the LLM. They say that we can't possibly understand the
| limitations of this method; therefore there are no limitations at
| all! It's absurd.
| soraki_soladead wrote:
| > They are not intelligent.
|
| Citation needed. Numerous actual citations have demonstrated
| hallmarks of intelligence for years. Tool use. Comprehension
| and generalization of grammars. World modeling with spatial
| reasoning through language. Many of these are readily testable
| in GPT. Many people have... and I dare say that LLMs' reading
| comprehension, problem solving and reasoning skills do surpass
| that of many actual humans.
|
| > They model intelligent behavior
|
| It is not at all clear that modeling intelligent behavior is
| any different from intelligence. This is an open question. If
| you have an insight there I would love to read it.
|
| > They don't know or care what language is: they learn whatever
| patterns are present in text, language or not.
|
| This is identical to how children learn language prior to
| schooling. They listen and form connections based on the
| cooccurrence of words. Their brains are working overtime to
| predict what sounds follow next. Before anyone says "not from
| text!" please don't forget people who can't see or hear. Before
| anyone says, "not only from language!" multimodal LLMs are here
| now too!
|
| I'm not saying they're perfect or even possess the same type of
| intelligence. Obviously the mechanisms are different. However
| far too many people in this debate are either unaware of their
| capabilities or hold on too strongly to human exceptionalism.
|
| > There is this religious cult surrounding LLMs that bases all
| of its expectations of what an LLM can become on a
| personification of the LLM.
|
| Anthropomorphizing LLMs is indeed an issue but is separate from
| a debate on their intelligence. I would argue there's a very
| different religious cult very vocally proclaiming "that's not
| really intelligence!" as these models sprint past goal posts.
| [deleted]
| thomastjeffery wrote:
| > hallmarks of intelligence
|
| All through the lens of personification. It's important to
| take a step back and ask, "Where do these hallmarks come
| from?"
|
| The hallmarks of intelligence are _literally what is encoded
| into text_. The reason LLMs are so impressive is that they
| manage to follow those patterns without any explicit
| direction.
|
| > I dare say that LLMs' reading comprehension, problem solving
| and reasoning skills do surpass that of many actual humans.
|
| People tend to over-optimize reading comprehension by
| replacing what they are reading with what they _predict to
| be_ reading. Every person has a worldview built out of prior
| knowledge that they use to disambiguate language. It takes
| effort to suspend one's worldview, and it takes effort to
| write accurate unambiguous language.
|
| An LLM cannot have that problem, because an LLM cannot read.
| An LLM models text. The most dominant patterns of text are
| language: either the model aligns with those patterns, or we
| humans call the result a failure and redirect our efforts.
|
| > Anthropomorphizing LLMs is indeed an issue but is separate
| from a debate on their intelligence.
|
| How could that even be possible? The very word,
| "intelligence" is an anthropomorphization. Ignoring that
| reality moves the argument into pointless territory. If you
| try to argue that an _anthropomorphized LLM_ is intelligent,
| then the answer is, "No shit, Sherlock. People are
| intelligent!" That doesn't answer any questions about a real
| LLM.
|
| > as these models sprint past goal posts.
|
| Either an LLM succeeds at a goal, or it fails. It has no idea
| what the difference is. The LLM has no concept of success: no
| category for failure. An LLM has no goals or intentions, and
| doesn't make a single logical decision.
|
| So what is its success coming from? The text being modeled.
| Without humans authoring that text, there is no model at all!
|
| The goals are authored, too. Every subject, every decision,
| every behavior, and every goal is determined _by a human_.
| Without human interaction, the LLM is nothing. Does nothing
| think? Does an arrow _find_ its target? Of course not.
| roflyear wrote:
| Citation needed for you!
| soraki_soladead wrote:
| Sure. A few below but far from exhaustive:
|
| - https://arxiv.org/abs/1909.07528
| - https://arxiv.org/abs/2212.10403
| - https://arxiv.org/abs/2201.11903
| - https://arxiv.org/abs/2210.13382
|
| There are also literally hundreds of articles and tweet
| threads about it. Moreover, as I said, you can test many of
| my claims above directly using readily available LLMs.
|
| GP has a much harder defense. They have to prove that,
| despite all of these capabilities, LLMs are not
| intelligent. That the mechanisms by which humans possess
| intelligence are fundamentally distinct from a computer's
| ability to exhibit the same behaviors, so much so that it
| invalidates any claim that LLMs exhibit intelligence.
|
| Intelligence: "the ability to acquire and apply knowledge
| and skills". It is difficult to argue that modern LLMs
| cannot do this. At best we can quibble about the meaning of
| individual words like "acquire", "apply", "knowledge", and
| "skills". That's a significant goal post shift from even a
| year ago.
| naiv wrote:
| 'Programmers' who are scared of ChatGPT, Copilot, etc. would be
| scared of their IDE as well if they ever read the manual and saw
| what is already easily possible with the tool they use daily.
| borissk wrote:
| Horses which are scared of cars and trucks should just learn
| how to run faster...
| thecrumb wrote:
| Still waiting for flying cars, paperless office, robots that will
| steal my job and countless other promises of utopia.
| optoman wrote:
| I think that the true nature of what Carmack is alluding to here
| is that true Value, even in the domain of software engineering,
| is usually attained by the application of critical thinking. The
| notion that a person who knows how to form correct syntax is
| equally as productive as someone who understands the problem a
| business or user faces and can come up with a working technical
| solution to that problem does not stand up to scrutiny. It's like
| saying someone who knows how to wield a pencil is equally as
| capable as Tolstoy in the discipline of writing. An LLM that can
| code is the same but the pencil wielder will be exposed as one
| who adds no value and Tolstoy will become even more powerful.
|
| I predict that the real and more radical problem than some Stack
| Overflow Copiers losing some marketability is when Product and
| Management start buying the idea that the technical domain is
| something that doesn't need to be well understood anymore because
| we have an LLM that keeps coming up with plausible answers. I
| work in mortgage technology where there is a great deal of
| thought and discipline that needs to go into the technical
| modeling of who gets underwritten for a mortgage. Imagine a
| mortgage company that built its underwriting rules and models
| using an LLM with you as the head LLM seance holder. All of a
| sudden a mass of customers gets denied mortgages for some unknown
| reason and Management comes to you to ask what happened.
|
| Would you know what happened? Could anyone even know what
| happened?
|
| "Sorry, Customer! We actually don't know anything about what we
| built or how it works."
|
| LLMs may eventually eliminate the act of typing code but the real
| question is will they eliminate the need for critical thought.
| tunnuz wrote:
| Wisdom.
| asah wrote:
| First off, there's a lot of people shooting off their mouths -
| ignore anyone who hasn't used ChatGPT extensively: it takes some
| training to learn to use it.
|
| Several senior developer friends have been using ChatGPT quite a
| bit and it seems to work well in lots of places:
|
| - isolated algorithms and fiddly bits - it writes complex SQL
| statements in seconds, for example. LLMs should make quick work
| of fussy config files.
| - finding, diagnosing and fixing bugs (just paste the code and
| error message - really!)
| - unit tests and examples
| - comments and documentation
|
| Professional developers will recognize that we're talking 50-90%
| of the LABOR-HOURS that go into software development, and
| therefore fewer developers to get the same work done. Sure, we
| just do more - but then we quickly hit other speed limits, where
| coding isn't the problem. I can see layoffs among the bottom-N%
| of developers, while more sophisticated developers add LLMs to
| their toolbox and use this productivity to justify their high
| $/hour.
|
| I see AI writing code that casual human readers don't really
| understand, but this is OK because the AI includes comments --
| just like developers do for each other today.
| yoyohello13 wrote:
| I'm sure my org isn't unique, but we are constantly at max
| capacity and we have no money to hire new people. We have
| projects in the queue that will keep us occupied for years. I
| don't think even a 50-90% speedup will lead to layoffs. We
| will just finally be able to get more shit done.
| htag wrote:
| The backlog grows at a faster pace than the company completes
| work. The backlog is never meant to be completed. Your job
| security is not based on having a long well groomed backlog.
| alphadog wrote:
| What prompts are they finding useful for creating SQL
| statements?
| asah wrote:
| I tell it that I'm using <database and version> and give it
| the relevant DDL statements (e.g. CREATE TABLE, etc) then ask
| it to write the query to do <x> in plain English. It does
| surprisingly well.
|
| But!!! the first response is rarely dead-on and instead, just
| like a junior eng I need to guide it: use (or don't use) SQL
| construct <x>, make sure to use index <x>, etc.
|
| Example: to sum the values in a JSONB field, GPT desperately
| wanted to use a lateral join but that would have made for a
| very awkward set of VIEWs. So instead I directed it to create
| a function to perform the summation.
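|
| As a concrete illustration of the JSONB-summation query being
| discussed, here is a minimal sketch. It uses SQLite's json_each()
| in place of Postgres JSONB, and the `orders` table and its values
| are made up for the example:

```python
import sqlite3

# Hypothetical schema standing in for the table the commenter describes;
# SQLite's json_each() plays the role of expanding a JSONB field.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amounts TEXT)")
con.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, '{"widgets": 10, "gadgets": 5}'), (2, '{"gizmos": 7}')],
)

# Sum every numeric value inside the JSON column, per row -- the kind
# of query the commenter steered GPT toward instead of a lateral join.
rows = con.execute(
    """
    SELECT o.id, SUM(j.value) AS total
    FROM orders o, json_each(o.amounts) AS j
    GROUP BY o.id
    ORDER BY o.id
    """
).fetchall()
print(rows)  # [(1, 15), (2, 7)]
```

| In Postgres one would wrap the equivalent expansion
| (jsonb_each_text plus a cast) in a SQL function, which is
| what the "function to perform the summation" refers to.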
| usaar333 wrote:
| GPT-4 is simply outstanding at writing SQL statements. I made
| a bunch of examples with non-trivial customer revenue metrics
| assessments:
|
| https://www.dropbox.com/s/hdhycf7l00d3sx8/gpt4_attempt_sql_q.
| ..
|
| It can do basic math reasonably well (and this is achieving
| generation where GPT-3 failed). Interestingly, asking it to
| verify itself does resolve bugs sometimes. Managed to fix
| subtle count() denominator bugs and an inflation-adjustment
| error with not much hinting on my end.
|
| You can only see it struggle really hard at the end when it
| tries to normalize month ranges correctly. It seemed to reach
| conceptual problems over how LAST_DAY() was being used and
| couldn't debug itself.
| Zetice wrote:
| Sorry but no, ChatGPT can only do some very specific and
| specialized tasks, it doesn't save meaningful time. It's a tool
| in the toolbox, but it's not a game changing tool; just one
| more thing to reach for when you need a complex transformation,
| or when you need to unblock yourself.
|
| Zero developers will lose their jobs due to LLMs. That's just
| yet more needless hype and expectation.
| jasondigitized wrote:
| If ChatGPT / GPT-4 or future versions can write unit /
| functional / integration tests, that's an absolute productivity
| game changer.
| Aperocky wrote:
| > Professional developers will recognize that we're talking
| 50-90% of the LABOR-HOURS
|
| More like 20-30% at max. And that's not including debugging the
| output of ChatGPT, which I've found makes subtle mistakes -
| which will probably take away all of the time gained.
|
| Writing code isn't the biggest time sink, figuring out what to
| write is.
| _fat_santa wrote:
| Like you I found that ChatGPT is not really all that great at
| coding, but great when you ask it to do very specific grunt
| work. I'm working on a new database, and one thing I found it
| super useful for is generating test data. I would just tell it:
| "here's the CREATE TABLE statement, create 50 rows of test data
| off of it, with all of these specifications: this has to be
| this, that can only be 1 or 2, yada yada yada."
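|
| The kind of constrained test data described above can also be
| sketched in a few lines of code. The `events` table and its rules
| here are hypothetical, just to mirror the "can only be 1 or 2"
| style of constraint:

```python
import random
import sqlite3

random.seed(0)  # deterministic test data
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, kind INTEGER, score REAL)"
)

# 50 rows honoring simple constraints: `kind` may only be 1 or 2,
# `score` stays within 0..100.
rows = [
    (i, random.choice([1, 2]), round(random.uniform(0, 100), 2))
    for i in range(1, 51)
]
con.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

count, = con.execute(
    "SELECT COUNT(*) FROM events "
    "WHERE kind IN (1, 2) AND score BETWEEN 0 AND 100"
).fetchone()
print(count)  # 50
```

| The point of handing this to an LLM instead is that you skip even
| this much boilerplate and just describe the constraints in prose.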
|
| > Professional developers will recognize that we're talking
| 50-90% of the LABOR-HOURS that go into software development,
|
| I call it 'dumb coding'. You have a type of programming that
| requires you to really think, and then there's the type where
| you just need to write 200 lines of code but you know exactly
| what to write. If AI could pick up the slack on 'dumb coding'
| and let us think about 'smart coding', we would all be way way
| more efficient.
| roflyear wrote:
| GPT-4 is on another level, but still very far from being able
| to work on anything larger than a medium-sized class.
| soheil wrote:
| gpt-4-32k is on yet another level. I think a gpt-4-32m
| would replace any senior engineer working on a complex code
| base.
| mouzogu wrote:
| People will always want things cheaper and faster (value)
|
| - looking at AI as "chat" or textbox or AI autocomplete is wrong
| imo
|
| - companies will come that utilise AI to deliver things faster
| and cheaper
|
| - you quote $10k and 1 month, we will do it for $1k in 1 week
|
| - first to go is low value difficult to automate work which is
| already offshored and commoditised (basic webdev, graphic design)
|
| - later the middle tier of work, that requires internal context
| (once whitelabel NDA'ble bespoke AI solutions are mainstream)
|
| ----
|
| There will always be a need for top tier leetcoders, but the
| barrier to entry will get much higher.
| Yoric wrote:
| > There will always be a need for top tier leetcoders, but the
| barrier to entry will get much higher.
|
| That's definitely a possibility. The barrier to entry won't be
| just in terms of how much you need to know, though, but also
| the fact that society as a whole might not require that many
| top tier leetcoders.
| sokoloff wrote:
| There will be an effectively endless demand for anyone who
| can create 10x the value while only demanding 3x the pay of
| the typical worker.
|
| I don't see AI changing that principle.
|
| Most everyone who is "good at programming" today will be
| totally fine (and maybe even _better off_ ).
| Yoric wrote:
| > There will be an effectively endless demand for anyone
| who can create 10x the value while only demanding 3x the
| pay of the typical worker.
|
| That's one possible future.
|
| Another future is one in which a single AI assisted "full
| tech developer" can solve all the coding problems of a
| company within say, one week. No company will require the
| services of an entire "full tech developer", so society
| will employ roughly one freelancer / consultant per ~30
| large enough companies.
|
| That's a smaller market than today's.
| sokoloff wrote:
| Do you think that, within a week, anyone could even
| _understand_ all the coding problems of a company large
| enough to employ an SWE today?
| Yoric wrote:
| I've been interviewing with a lot of companies in the
| past few weeks. I'll agree that one week is probably
| optimistic, but most of them seem to have requirements
| that are in fact pretty simple.
|
| Admittedly, reaching the one week stage might require
| better programming platforms and APIs.
| matwood wrote:
| On one hand you're right, but you're missing the other side -
| business requirements have always expanded with each
| innovation.
|
| Basic web dev has been commoditized for a while, but a company
| of any size requires much more than basic web dev.
|
| Humanity has an uncanny ability to devour any excess capacity,
| and I see no reason it won't do the same with things AI makes
| easier/lower cost.
|
| What I do worry about is that AI will put further pressure on
| the lower skilled jobs keeping many people afloat. Call centers
| for example, likely will not exist very soon.
| fhd2 wrote:
| "Humanity has an uncanny ability to devour any excess
| capacity"
|
| Well put!
|
| You could also see the pressure on lower skilled jobs as a
| positive thing in that light, I suppose: Right now it's not
| super cheap to run a terrible call center. If five years from
| now you'll essentially get that for free, companies might see
| an opportunity to rise above "cheap and terrible" to be more
| competitive, which would likely create jobs again. Jobs
| which, however, don't have the main goal of being done cheap
| and terribly. A way more level playing field than what we
| have today, where companies compete by choosing the areas in
| which they want to shine.
|
| Maybe terrible writing, art and customer service becoming
| nearly free _can_ be a tipping point of sorts. It only
| wouldn't be if nobody cared - then the market will adjust for
| that. But I'm not giving up on consumers just yet.
| mouzogu wrote:
| Yes. I see AI as a good thing.
|
| - it allows us to focus on the goals and not the process
|
| - we still need people who understand the PROCESS
| (developers)
|
| - people who can build tools that bridge the gap between
| AI generated zero value spam and bespoke human created
| content.
|
| - we need tools to introduce subtlety and craft to AI
| outputs
|
| - those are things that are important in a high saturation
| low margin environment - attention to detail (consider
| videogame crash of 1980s).
|
| At the moment, if I generate an image with SD, it is very
| difficult to change something specific in a nuanced way. That is
| where tools made by good developers will emerge. High value work
| is nuanced and often extremely subtle.
|
| An increased ability to introduce subtlety will also open
| up new niches of interest, because the creator can focus on
| expressing their ideas in very personal ways.
| quonn wrote:
| The leetcoders will get replaced first. There will be a need
| for designers of complex systems and possibly also algorithms
| for some time.
| ivan_gammel wrote:
| AI can make some jobs obsolete. This was a promise of no-code
| tools too, but they failed, because they were tools, not
| solutions.
|
| I've seen plenty of startups doing the same thing again and
| again: building a conversion funnel, setting up some basic app
| where customers can register and receive some service. Outside
| of their USP, the jobs to be done for end users are in 99% of
| cases the same. In 2023 this should not have required any
| engineering or even advanced configuration effort, yet there it
| is.
|
| We see lots of CMS, CRM and other half-baked or too
| enterprise-focused systems which deliver a tool rather than a
| solution to the end user problem. And a tool needs an operator.
| Startup founders should not need a dedicated person on a
| performance marketing team to launch some basic campaign on
| Facebook or Google, get and convert website visitors, etc. It
| must be a content problem, not a technical problem to solve. But
| no-code simply sucks and we still hire people to set up GA,
| Zapier, Hubspot and a Squarespace website. The barrier is still
| too high.
|
| Why? A good solution must guide and educate people on how to use
| the tools. It must offer reasonable defaults. It must suggest
| content. It must suggest operational processes optimized for the
| specific use case. It must cover that use case end-to-end,
| without requiring users to find out how to complete the
| remaining 10% of the task (often a very big uncertainty).
|
| All of this can and must be achieved with the help of AI. AI is
| THE missing component in no-code. What if CMS auto-filled SEO
| metadata based on the page content? What if CMS provided
| usability heuristics? What if CRM proactively suggested the email
| engagement campaign based on the funnel performance? What if all
| those tools detected their usage patterns and educated users on
| how to improve productivity and introduce best practices in their
| work?
|
| We do not need engineers to build a login or user profile page,
| this is a very stupid way to spend the money. Yet there are
| plenty of them which still build login and user profile pages.
| They must lose their jobs. But AI creates a lot of opportunities
| for those who want and have the intellectual capability to work on
| more interesting tasks: just integrating AI and offering great UX
| is an enormous challenge for the next two or three decades. Even
| if some work becomes redundant soon, there's still enough to keep
| even the youngest generations of software developers busy until
| their retirement.
| _-____-_ wrote:
| I'm less worried about AI replacing my job, and more excited
| about how much more I'll be able to accomplish with AI. It's a
| multiplier.
| sys_64738 wrote:
| Companies hate any skill set that can hold them hostage in the
| long run. Today that is the need for programming skills, which is
| why salaries are so high. But the moment that a replacement for
| most programmers occurs, whether automation, AI, or zero code
| needed, employers will dump programmers before they can compile
| their last line of code successfully. In essence, this is the
| golden age for programming and the cliff could be just over the
| horizon.
| mikewarot wrote:
| It's _bad management_ that hates people with skill sets that
| equalize the power relationship, because those skilled in the
| arts might hold them accountable. If you're roaming the earth
| making the big bucks, but barely containing your imposter
| syndrome, the last thing you want is interaction with someone
| with actual competence.
|
| Companies are legal fictions, and have no motives because they
| don't actually exist; the people who control the resources of
| the company are the ones to worry about.
| m3kw9 wrote:
| Product manager: we need api 23145.1 to be able to talk to api
| 83316..
|
| This is something a product manager would never do; it still
| requires a technical person to translate the business logic
| accurately for the AI to build.
|
| Look at how 3D animators use GUIs to build; previously they had
| to use a lot more code, but the expertise needed for a good job
| is still highly sought after. This is what could happen to
| software.
| albertopv wrote:
| I have to deal with continuously changing requirements, dozens of
| microservices, multiple types of DBs, client integrations with
| endless techs, and client support tickets written in an obscure
| human language where you have to guess what they meant.
|
| AI is not going to take over these things anytime soon, if ever.
| [deleted]
| flappyeagle wrote:
| It will make some CS jobs obsolete. Hopefully it will create new
| ones.
| jackblemming wrote:
| If programmers can be replaced by AI, so can every other white
| collar job and humanity will look very different than what it is
| now. And I've been using ChatGPT and copilot and it's a nice tool
| but nowhere near a replacement for knowing how to program.
| pjmlp wrote:
| Right now it is at the cognitive level of a kindergarten
| child, and like humans it will grow up and evolve, unless we
| nuke ourselves before it happens.
| jstx1 wrote:
| Children don't have the same level of general knowledge in
| kindergarten. Adults don't have it either, not even the best
| adults.
| danparsonson wrote:
| It's a word association machine, it's not even at
| kindergarten level when it comes to general cognition.
| pjmlp wrote:
| Regardless, it is foolish to believe it won't improve and
| take over tasks that in 2023 it still isn't able to do.
|
| Who would have guessed, beyond Hitchhiker's Guide to the Galaxy,
| that I could use my phone in 2023 to translate anything on a
| foreign restaurant menu in real time.
| danparsonson wrote:
| It will of course continue to improve at what it does but
| I personally think it's unrealistic to assume it will
| somehow spontaneously develop generalised cognitive
| ability; there are surely limits to how far this
| particular approach will take us.
| avereveard wrote:
| right now it will only solve problems someone else already
| knows the solution to, so not even at that.
|
| that said, I expect an ai assisted clerk to be orders of
| magnitude faster than not. it will be tough for people at
| the bottom of the learning curve for a bit, but in half a
| generation the educational offering will include how to work
| together with ai, massively improving worker productivity
|
| that is not to say it won't have negative impact. there's so
| much work that we need done; currently programmers are in high
| demand, and it's one of the highest paying jobs, but that
| will change, possibly dramatically. I expect people at the
| top of the chain to be in trouble first (architects and
| whatnot) because they are the least creative and the ones that
| possibly require the most knowledge, things ai does
| exceptionally well as of today.
| [deleted]
| pjmlp wrote:
| Having helped companies in traditional lines of business to
| "streamline" their work processes, I am quite clear that
| much of the work that the West outsources to Asian
| countries will eventually be outsourced to AI.
|
| It is already the case that in many industries there are hardly
| any traditional coder jobs on site, and having to explain
| every step to avoid it going off the rails in offshoring
| assignments is hardly going to be any different from explaining
| it to an AI, when it gets good enough.
|
| Maybe by GPT-20 only, but it will come, and then the roles
| of architects and business analysts are the only ones left.
| execveat wrote:
| There's no reason this couldn't happen, but surely it
| would require something better than LLM.
| Yoric wrote:
| > Maybe by GPT-20 only, but it will come, and then the
| roles of architects and business analysts are the only
| ones left.
|
| Agreed. Although... we actually don't know which jobs
| will be the ones left. For all we know, it will have
| taken over business analysis, too. Plus, from my
| experience working on architecture, I suspect that
| automating that part of my job won't be too hard.
|
| For all we know, the only jobs left will be nurse and
| deep sea miner.
| raincole wrote:
| It's not a kindergarten child at a cognitive level. It's a
| different kind of cognition than humans, if you can call it
| "cognition" at all.
| _-____-_ wrote:
| The lesson I've taken from ChatGPT so far is that
| "consciousness" may be much less interesting or "special"
| than we thought. It may turn out that it's nothing more
| than an emergent property of a shared language that models
| the world around us and gives us capacity to plan and
| communicate our actions.
| pjmlp wrote:
| Regardless, it is foolish to believe it won't improve.
| Ygg2 wrote:
| It's foolish to believe it will improve indefinitely or
| reach human parity.
|
| Wasn't AI driving just a few years away for 5 years?
| sneak wrote:
| It has no cognition. Do not personify the algorithm that
| cannot think.
| pjmlp wrote:
| It doesn't need to think per se; just like robots on an
| assembly line, or algorithms in HFT, it only needs to do a
| good enough job.
| pcthrowaway wrote:
| This person is asking about career prospects 10-15 years out
| though.
|
| I'm sorry, but the landscape then might be as alien to
| someone asking today as today's would have been to someone
| asking 15 years ago (2008).
|
| What John said is correct, but personally I think he's
| underplaying how much people could be affected. Those "product
| skills" take years of grinding to really sharpen, and in 15
| years only a few people might actually be needed to apply them
| scaramanga wrote:
| > I'm sorry, but the landscape then might be as alien to
| someone asking today as today's would have been to someone
| asking 15 years ago (2008).
|
| Hahahah. Yes. Who could have foreseen the trailblazing
| advances in the tech industry such as "television, but over
| the internet", "booking rooms, but via a website" or "posting
| messages on a forum"
|
| Don't forget the stuff powering it: "RPC, but over HTTP",
| "scripting languages, but compiled", or "Key-value stores"
|
| If only I had dared to dream.
| r_hoods_ghost wrote:
| 2008 was extremely similar to today, although the webdev
| ecosystem wasn't quite as degenerate. I'd say you'd have to
| go back to the pre internet era to find a work environment
| that was fundamentally different.
| csande17 wrote:
| Was the tech landscape much different 10-15 years ago? This
| is a genuine question; the iPhone App Store was really the
| last "big thing" to happen to the industry in my mind, and it
| came out in 2008.
| raincole wrote:
| Not much different. If you were an intern Java programmer 10
| years ago, it's totally possible that you're still a senior
| Java programmer today.
| pharmakom wrote:
| 2008 wasn't so different was it? I guess the big new thing in
| work since then is the "gig economy".
| pzo wrote:
| The whole mobile economy pretty much started in 2008. The
| first iPhone was released in 2007, but the App Store was
| launched in 2008. This changed the landscape dramatically,
| even if you only consider software development. Before 2008
| you were fine writing just a Windows-only desktop app in
| Delphi - no smartphones, tablets, smartwatches, smart TVs, and
| you could leave out supporting macOS or Linux.
| pcthrowaway wrote:
| Well I'm in web dev (though I was studying CS in 2008) and
| the 2008 landscape had almost none of the same things.
| jQuery was not yet a household name, let alone SPAs.
| Facebook had barely 100 million users. Marc Andreessen hadn't
| yet written about "software eating the world".
| Personally I was more optimistic. If anything, the last 15
| years have seen the growth of an attitude of tech
| "entitlement", because hackers got to a lot of the ideas
| that now seem obvious in hindsight before the big
| corps could.
|
| I'm sure there's still room for innovation, but I think a
| lot of it going forward will be driven by rapid improvement
| in AI capabilities.
|
| In 2008, tech wasn't everywhere. iPhones were brand new and
| very few people had them. There was no "mobile browser"
| market share (though we did have SMS gateways). 77% of the
| global population hadn't even been on the internet yet.
|
| AI looks like it's going to be at the forefront of the next
| big wave of fundamental changes to society, and it's really
| hard to predict where that will lead us. But I suspect it's
| going to become apparent that this relatively brief period
| of tech-elite empowerment was a historical anomaly, because
| the AI underlings are going to be willing to do a lot more
| work with none of the coddling, and they're going to
| improve very quickly.
| coffeebeqn wrote:
| I totally don't see that. If you showed me AWS and modern
| machines and Go and React in 2008 I would certainly see
| that yeah there was some incremental progress but by no
| means would my mind be blown. Not much has changed. We
| still write clients and servers and use http and most of
| the same languages are still popular but slightly
| updated. Databases are essentially the same.. how good
| phones are would probably be the most exciting thing
| apart from GPT.
|
| Or TypeScript! I was writing ActionScript 3 in 2008, which
| is essentially the same spec.
| jmull wrote:
| Regarding 2008 vs 2023... how to view it probably depends
| on where you were in your career in 2008. To me 2008 ->
| 2023 looks like mostly shifting details.
|
| SPAs certainly were a thing back then, it was just called
| AJAX. (Not to mention the desktop apps that were,
| architecturally, almost the same thing.) jQuery was a
| _response_ to the popularity of putting interactivity in
| the browser, not a precursor.
|
| The questions remain the same, not just from 2008, but
| going back a long ways... Where does the code live? How
| is it transformed to execute, where does it execute and
| how it is moved there? Where does the data live, how is
| it transformed and moved to where it needs to be
| displayed, and how are changes moved back? When some of
| the answers shift, due to changing network capabilities,
| processing capabilities of nodes, or scaling needs, it
| doesn't really change the overall picture.
| revelio wrote:
| Huh? Gmail was an SPA and that dates from 2004. I don't
| think a VC writing a blog post says much about how tech
| had changed. Smartphones existed before the iPhone and
| Android, they just weren't as popular.
| pcthrowaway wrote:
| The term "SPA" wasn't in use until at least after 2009
| and gmail was probably using some hacky AJAX
| (XMLHttpRequest wasn't even standardized until, what,
| 2006?). Chrome wasn't launched until 2008 so they weren't
| able to get away with just adding the APIs they needed
| into the browser. Backbone wasn't even released until
| 2010 and Angular probably wasn't conceived of internally
| until late 2008.
|
| Yes, gmail might have had some SPA-like behaviour in
| 2004-2006 but it was nothing like what we have today.
| Pretty sure I got access in 2005 because I knew someone
| who worked at google, and it was mostly doing full
| refreshes between actions at the time, like pretty much
| the entire rest of the web
| revelio wrote:
| SPA is just an abbreviation of "single page application"
| and only means a web app that doesn't do full page
| reloads, it doesn't require the use of any specific
| framework, so Gmail definitely qualifies and it used it
| from version one. It wasn't even the first,
| XMLHttpRequest was created by Microsoft for Outlook Web
| Access and it shipped in 1999 in IE5. Before Gmail there
| was also Oddpost, another SPA webmail app, so people
| started using it to build non-reloading web apps almost
| immediately. Gmail was the iPhone of web mail: not the
| first, no real new tech, but just very well done overall
| and popularized the concept.
|
| You seem to be trying to redefine SPA to mean something
| else and much vaguer - the use of some specific
| frameworks, or not being "hacky" whatever that means -
| but your history here is just off.
|
| Also, jquery was written in 2005 and launched in 2006,
| and became popular very fast. It was definitely pretty
| well known by 2008 and of course jquery itself was
| nothing new, most companies had very similar sets of
| utility libraries for JS that they used. Jquery just
| happened to be open source and have a monomaniacal focus
| on terseness at almost any cost.
|
| Reality is the web has changed relatively little since
| 2008. It got some more APIs that weren't new, they were
| used in native desktop/mobile apps for many years
| beforehand, and that's about it.
| onion2k wrote:
| We've had LLMs for about 5 years so far in non-academic
| research. If we're talking 10 years out that means we're
| looking at tech that's about 1/3 through its development to
| date.
|
| Take any mature-ish technology that you use today and
| compare the version 1/3 through its life to the version you
| use now. Look at Chrome 20 compare to Chrome 111, or React
| 14 compared to React 18, or an iPhone 4 compared to an
| iPhone 14, or a car from 1950 compared to a car today...
|
| The difference is always _quite_ significant. Superficially
| they're still the same thing, but if you look at the
| detail everything is just _better_. AI will be the same.
| marginalia_nu wrote:
| Eh, the difference is almost entirely in presentation. At
| the core a car today doesn't do significantly more than a
| car in 1962.
| onion2k wrote:
| Superficially they're similar in the "they both have 4
| wheels and an engine" sense, but you could examine
| literally any part of a car today compared to one from
| the 1950s and find huge improvements. The efficiency,
| safety, comfort, tech, manufacturing... Everything is
| better.
| danieldk wrote:
| I agree with the safety angle, but besides that driving a
| car from 2023 is not substantially different from the
| 1950s (in the sense that it opens up a lot of new
| possibilities).
| danieldk wrote:
| When I started reading your comment I thought you were
| going to argue the opposite. Getting my first iPhone (3G)
| was a huge change. iPhone 4 to the latest are mostly
| incremental improvements. Aside from the camera, I could
| probably live with an iPhone 4 without many issues. Only
| the software is a lot more bloated now.
|
| We still had a Moto X from 2013 that my wife would power
| on every now and then to test an app that they were
| developing (iOS household), and besides the camera it
| still looks like a perfectly usable modern smartphone.
| When using it, it doesn't feel like a phone from
| prehistory.
| jstx1 wrote:
| You can't extrapolate from an arbitrary selection of
| technologies and assume that LLMs will have the same
| trajectory. They could be like the iPhone, or they could
| be like self-driving cars which have been a year away
| from replacing all drivers for 10 years now.
| pharmakom wrote:
| Self-driving cars a few years ago seemed particularly
| close in hype level and apprehension to LLMs today, but
| progress on those has not matched expectations at all.
| What if GPT4 is the last major advance in LLMs for a
| really long time?
| thequadehunter wrote:
| Might just be me, but I think the big difference here is
| the level of adoption. Everybody with an internet
| connection can use an LLM. It hits closer to home that
| way, whereas driving is very dangerous and most people
| haven't used a self driving car before.
| onion2k wrote:
| _You can't extrapolate from an arbitrary selection of
| technologies..._
|
| I'm not. I'm saying it's true for _every_ technology.
| Everything gets better with time.
|
| As for self driving cars, compare a DARPA challenge car
| from 2013 to a Waymo car today. It's massively better.
| krapht wrote:
| Where's my fusion powered flying car and electricity too
| cheap to meter?
| [deleted]
| chii wrote:
| > as todays would have been to someone asking 15 years ago
| (2008).
|
| I don't think, if you took someone from 15 yrs ago and
| transplanted them here today, that they'd find it all that
| different technologically. Sure, machines are faster,
| slightly different, and such, but the fundamentals haven't
| changed. A software engineer could just as well write an
| app today as they could have 15 yrs ago.
|
| You'd have to go back 30 yrs, for computers (and the
| landscape of computing) to have been different enough, that
| you can't transplant a software engineer.
| jocaal wrote:
| The programmers back in the day were pretty good. I think
| a decent programmer from 30 years ago would be better than
| the average today.
| d0mine wrote:
| 30 years ago (1993): Linux existed, Python existed, web
| existed (mosaic), DOOM (3D graphics), and even Apple Newton
| (mobile) existed; and C, shell, windows (GUI), spreadsheet,
| sql, etc were known long before that.
|
| What exactly has happened that was revolutionary in the
| last 30 years? JavaScript? (a two-week project)
|
| amazon, google, facebook, netflix, iphone, instagram,
| tiktok -- execution is great but seems inevitable that
| somebody will create it. Ok, for non-IT people iphone was a
| game changer (the first personal computer that your
| grandmother can actually use).
|
| The ability of generative AI to produce BS
| indistinguishable from human BS is very impressive but it
| remains to be seen whether it is a net positive for an
| average developer (the time wasted correcting it, or
| waiting for its output, could be spent understanding the
| problem better--typing the code itself is a small part of
| the work for a programmer who knows what they are doing).
| sneak wrote:
| You have plenty of time and can learn CS and earn a lot of
| money for years even if at exactly 120 months from now your
| job is made obsolete. It doesn't take 9 years to learn to
| code.
|
| The premise of all this seems to be that learning how to
| program computers is difficult or complex. It is not.
|
| Also, AI will not replace human reasoning in 10-15 years. If
| it does, it means AGI, and we all have much bigger problems
| than layoffs.
| Yoric wrote:
| > It doesn't take 9 years to learn to code.
|
| True. But I guess the big question is what kind of skills
| you're going to need after that.
| banyaaa wrote:
| I doubt AI will replace any job in my lifetime (got 40-50
| years left).
|
| Progress will grind to a halt just like self driving cars did
| because the real world is just too chaotic and 'random' to be
| captured by a formula/equation/algorithm.
|
| My prediction is: AGI is theoretically possible, but would
| require impractical amounts of computing power - kinda like
| how intergalactic travel will never happen.
| pzo wrote:
| We don't need AGI for LLMs to be useful.
|
| And regarding the comparison with self-driving cars: they
| are still improving, the bar for them is just much higher.
| If autopilot works 99.9% of the time, then 1 out of 1000
| drivers will die - so the technology has to be even
| better. For an LLM it's enough to be 90% good to be
| broadly useful.
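| A toy back-of-envelope version of that error-rate argument
| (the rates and the per-use framing are the assumptions made
| in the comparison above, not real data):

```python
# Expected number of failures in n independent uses, given a
# success rate; a sketch of the parent comment's arithmetic.
def failures_per_n(success_rate: float, n: int) -> float:
    return (1 - success_rate) * n

# Hypothetical autopilot at 99.9%: roughly 1 failure per 1000
# uses, intolerable when a single failure can be fatal.
autopilot = failures_per_n(0.999, 1000)

# Hypothetical LLM assistant at 90%: roughly 100 bad answers per
# 1000 prompts, tolerable when a bad answer is cheap to catch.
llm = failures_per_n(0.90, 1000)
```

| The raw rates aren't what matters; the cost of a single
| failure is.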
| homarp wrote:
| unless you want the LLM to write the firmware of the
| self-driving car.
| AshamedCaptain wrote:
| Not to "replace programmers", though. For the same reason
| you can't replace drivers even though they also make
| mistakes.
| pzo wrote:
| It's not about replacing all programmers. If one
| programmer with an AI assistant can do the same work as 2
| programmers, then one position is redundant.
|
| Even with self-driving trucks: if one truck driver is
| leading another truck behind controlled by AI, and just
| for safety you have a C&C center somewhere with one person
| monitoring 4 such AI trucks, ready to remotely take over
| in case of an unexpected event, then one truck driver
| position is redundant.
| mdorazio wrote:
| fwiw, self-driving cars did not grind to a halt,
| development just did not move as quickly as the pundits and
| self-promotion claimed. I just rode in a fully driverless
| car on public streets in downtown Austin this week.
| dennis_jeeves1 wrote:
| >What John said is correct, but personally I think he's
| underplaying how much people could be affected.
|
| Agreed, what John said was a bit of a platitude. I understand
| the spirit of what he said but he could have phrased it
| better.
| krisoft wrote:
| > If programmers can be replaced by AI, so can every other
| white collar job
|
| If programmers can be replaced by AI, then it can replace
| even the blue collar jobs. Because if it can't, that is
| what this developer here will be working on.
| wizofaus wrote:
| I'm pretty sure the reason human physical labour hasn't been
| replaced yet in many areas isn't because AI tech hasn't
| advanced sufficiently - there are real engineering challenges
| in automating physical interactions with the real world that
| it's hard to see how ever more advanced LLMs will help much
| with (though they could certainly assist in the design
| process). That humans are still needed to cook/assemble
| burgers or peel veggies is in some ways more surprising than
| the code and language generation capabilities of ChatGPT.
| krisoft wrote:
| > physical labour hasn't been replaced yet in many areas
| isn't because AI tech hasn't advanced sufficiently
|
| I'm a robotics engineer. There are two options. AI can
| either replace all jobs or it can't.
|
| If it can then we are all out of a job, and then the next
| project is how to organise society such that everyone can
| live a good and fulfilling life in harmony.
|
| If it can't, for whatever reason, then that is the next
| thing I will be personally working on. Simple as that.
|
| Because of this I don't see how it would be possible to run
| out of programming jobs before running out of all the other
| jobs first.
|
| You are talking about ChatGPT and LLMs, but what I am
| saying transcends particular technologies.
| coffeebeqn wrote:
| They're held back by robotics and energy storage more than
| anything. How would chatgpt paint my wall or fix my sink or
| install electrical wiring in my house?
| wizofaus wrote:
| It's worth wondering though how long it will be before
| ChatGPT could give you complete instructions for building
| a machine/robot capable of doing any of those things...or
| better still, building a factory that can churn out
| millions of such machines...
| tiku wrote:
| AI will still have a hard time understanding the real needs.
| That is your added value: understanding clients/your company
| and their needs, and thinking ahead.
| bobbruno wrote:
| I find it strange that so many people in this area get concerned
| about becoming obsolete. Back in college, I clearly remember a
| discussion we had (me and colleagues) about how our job was
| exactly to make ourselves obsolete as soon as possible, so we
| could go do the next order of things.
|
| I wish most of my real life work were exactly like that, it'd be
| much more fun.
| nabla9 wrote:
| "programmer" is not a single thing.
|
| Software jobs can be divided into expert jobs and laborer jobs.
|
| Even if the demand for _"code monkeys"_ decreases, demand
| for the much smaller group of software engineers with
| masters or PhD (equivalent) and good mathematical skills
| probably increases.
|
| It's a dynamic process where two forces find an
| equilibrium.
|
| >Automation, which enables capital to replace labor in tasks it
| was previously engaged in, shifts the task content of production
| against labor because of a displacement effect. As a result,
| automation always reduces the labor share in value added and may
| reduce labor demand even as it raises productivity.
|
| >The effects of automation are counterbalanced by the creation of
| new tasks in which labor has a comparative advantage. The
| introduction of new tasks changes the task content of production
| in favor of labor because of a reinstatement effect, and always
| raises the labor share and labor demand.
|
| Automation and New Tasks: How Technology Displaces and Reinstates
| Labor https://www.aeaweb.org/articles?id=10.1257/jep.33.2.3
| ilaksh wrote:
| Nonsense. You know for how many masters and PhDs their NLP
| and computer vision knowledge is now almost entirely
| irrelevant to most businesses, now that any uneducated
| person like me can access GPT-4 with an API call? Pretty
| much all of them. And within a year or two, open source
| models will be available to run onsite for the ones that
| have security issues.
|
| The best new models are so powerful and general that you
| literally don't have to train them for any specific task. Just
| give them some context.
|
| ChatGPT couldn't count. GPT-4 gets a high score on a math SAT.
|
| You and your friends will be easily replaceable with AI.
| Quite possibly already with GPT-4. If not, we should
| certainly anticipate it within 3 years.
| Dalewyn wrote:
| >"Software is just a tool to help accomplish something for
| people"
|
| It's common sense, but they say common sense is a superpower.
|
| Wise words for anyone dealing with tech to remember.
| bandika wrote:
| In my experience, as the career of a developer progresses,
| it gets less about coding and more about other tasks. The
| difference is probably the strongest between a senior vs
| principal/staff software engineer. In the places I worked
| principal/staff engineers are looking after the overall design
| and architecture, negotiating with teams developing other
| components, helping management with planning, looking after the
| progress of other devs especially new joiners, etc, etc. I'd say
| it's about 15% coding at that level and 85% everything else.
| szundi wrote:
| I would just remind everyone that this so-called intelligence is
| a generative text editor and feeds on our creativity/content.
|
| Probably it is going to degenerate (pun intended) after feeding
| on stuff that it generated itself.
|
| What do you think?
| seydor wrote:
| They can control its diet
| cma wrote:
| Yep, OpenAI know what came out of the rear end and can scrub
| it on the next training cycle, everyone else eats a little
| bit of shit.
| jstummbillig wrote:
| I'd say don't count on that one. Feeding on others'
| knowledge and then recombining it is also what we do. In
| comparison to AI we do not appear to be particularly great
| at any part of this game.
| szundi wrote:
| What GPT does as recombining is what it sees from us. AFAIK.
| Hopefully this AI thing will evolve faster than our compute
| capacity and we can switch it off in time :) Oh shit. It will
| read this later... :/
| kaetemi wrote:
| Apparently it's the opposite. They can improve its output by
| letting it reason logically or critically on its own output,
| somewhat like a thought monologue I assume. Not familiar with
| the details, though.
| drewcoo wrote:
| What better way to glue-code disparate things together?
| nilsbunger wrote:
| In 2003, I had a vigorous debate with someone advising their
| nephews not to go into CS because outsourcing to India would
| commoditize it.
|
| I don't know if the AI stuff will play out similarly, there are
| some differences.
|
| But it seems to me there is an infinite amount of software to
| build, and when we increase the productivity of software
| development, we just build fancier software, faster.
| JaDogg wrote:
| Now imagine: Neuralink plugged directly into ChatGPT (a lot
| faster version) and you. You don't even need to type.
| ilaksh wrote:
| I have a version that you can talk to. And it will talk back.
| https://aidev.codes
| rs_rs_rs_rs_rs wrote:
| This reminds me of a tweet I saw a couple of weeks ago from
| someone (I don't remember who it was) that said the reason
| they ship stuff fast is because they're using jQuery. Focus
| on what you're building and use the tools you're experienced
| with; don't jump on every fad.
| timwheeler wrote:
| Was it @levelsio?
| rs_rs_rs_rs_rs wrote:
| Yes! https://twitter.com/levelsio/status/1633466912024973319
|
| Thank you
| itronitron wrote:
| no one has lost their job to a roomba
| okamiueru wrote:
| My hot take on AI code generation, which matches my understanding
| of how all of these GPTx models work: if you don't understand the
| output, you are far worse off using it than not.
|
| At the moment, it works as a pretty powerful suggestion engine.
| It might suggest the wrong API to call, not handle the edge cases
| correctly etc. If you assume it does, or don't understand when it
| doesn't, you're in for a world of hurt.
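| A minimal sketch of that failure mode (hypothetical code,
| invented for illustration, not output from any real
| assistant):

```python
# A plausible-looking suggested completion with an unhandled
# edge case: averaging an empty list crashes.

def average_suggested(xs):
    # What a suggestion engine might plausibly emit: correct for
    # non-empty input, raises ZeroDivisionError on [].
    return sum(xs) / len(xs)

def average_reviewed(xs):
    # What a reviewer who understands the output writes instead:
    # the empty-input behavior is made explicit.
    if not xs:
        return 0.0  # or raise ValueError, depending on the caller
    return sum(xs) / len(xs)
```

| If you assume the first version handles every input, that's
| exactly the world of hurt in question.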
| tempodox wrote:
| A digital parrot, no matter how lifelike its utterances, cannot
| be genuinely creative. Writers of boilerplate and empty drivel
| will probably be replaced by language models, but not every
| software developer is like that.
| jimkoen wrote:
| Wow, I had to scroll for what feels like the first 100 comments
| to find a controversial opinion in this thread. Thank you!
|
| Offtopic, but it's scary how HN actually becomes worse than
| Reddit on these ChatGPT posts.
| lurker919 wrote:
| Are we sure GPT is going to improve 10x in 10 years? Hasn't it
| already been trained on the vast majority of available text data?
| We might get incremental improvements, but it's not like we have
| 10x more data lying around somewhere to feed GPT5.
| RichEO wrote:
| I don't have a very good sense of what kind of information
| GPT-4 had access to, but I imagine there is a whole world of
| knowledge locked up in books, particularly textbooks,
| journals and periodicals, that it doesn't have access to.
| That could be interesting.
| newaccount2023 wrote:
| 10x? Try 1000x. We haven't even tried hooking up different
| types of reasoning models (chess engine, weather modeler)
| together yet... eventually LLMs will be just one aspect,
| selected based on intended use.
|
| GPT4 is basically Pong. Within a few years we will be nostalgic
| for its surprise value
|
| HN has a God Complex when it comes to people like Carmack
| though, so you can't really survive disagreeing with him
| tetek wrote:
| Halt and catch fire vibes
| chiefalchemist wrote:
| The super power I value - and rarely see in my peers - is the
| ability to hear wants and discuss them to define needs.
|
| The initial stated wants are rarely the actual needs. "But they
| said _____." Yes, they did. That doesn't mean they got it right.
| People say a lot of ambiguous things. A client with a product or
| feature in mind is no different.
| austin-cheney wrote:
| That sounds like a good thing. There are many people paid to
| write software who absolutely cannot write original code and have
| no idea how things actually work. A lot of that can be, and
| probably should be, eliminated by AI.
| anoy8888 wrote:
| What is considered "original code"? The code you are
| writing is probably not original and is built upon layers
| and layers of abstraction that eventually translate into
| 0s and 1s. AI, or perhaps chatgpt20, could become another
| such abstraction layer.
| austin-cheney wrote:
| Original code refers to the ability to solve a given
| automation problem without somebody writing that solution for
| you. That means no frameworks or plugins that solve that for
| you. The difference is someone who can provide a solution as
| opposed to copy/paste/configure glue code.
| FranzFerdiNaN wrote:
| Those people still need to eat and pay rent so maybe we should
| solve that before making who knows how many millions of people
| jobless.
| austin-cheney wrote:
| Then they can find employment in industries facing dire
| shortages like education and nursing. If the only goal is to
| put unqualified people into seats for a paycheck then it
| doesn't matter what they do for income.
| nor-and-or-not wrote:
| Wow, so everyone is qualified as a teacher or nurse? It
| seems to me that you have no respect for, or knowledge of,
| those jobs and the people doing them.
| wiseowise wrote:
| Well, apparently everyone can lEaRn tO CoDe too, how is
| that any different?
| Yoric wrote:
| I agree with the general idea.
|
| However, retraining may become really hard. Especially if
| you're already, say, a 60 year old programmer (or business
| analyst, or executive assistant, etc.) who suddenly needs
| to become a (AI assisted) teacher or nurse.
| 8organicbits wrote:
| Wouldn't most of those people look towards early
| retirement? We've failed as a society if 60 year olds end
| their career in one of the highest paying professions and
| they can't cover rent and food.
| pbalau wrote:
| I don't think we failed as a society, I think those
| people failed.
| Yoric wrote:
| I know a few people who are a bit younger (maybe 55?) and
| who, after a career in high impact jobs, found themselves
| without any possible position. I know that they're not
| nearly ready to retire. I don't know about how much
| savings they have.
|
| In these cases, that's entirely independent from AI, but
| I suspect that AI will only make it harder.
|
| And yes, it feels to me like society is failing for many
| people who are approaching retirement age.
| sokoloff wrote:
| What is the driver underlying "no possible position"? Are
| they no longer capable/qualified to do anything that's
| still done in the economy?
|
| Or are they just unwilling to take a rank-and-file
| position?
| Yoric wrote:
| Let's try and come up with a few examples. I know two
| people who are former C-rank executives, both of them
| with strong ethical backbones. They have both led very
| high-profile projects (most people on HN know some of
| these projects). At this stage of their career, you could
| even say that ethics are not just part of their beliefs
| but also their personal brand.
|
| They have both been on the market for a few years by now.
|
| 1. Nobody will hire them as rank-and-file. It just
| doesn't make sense.
|
| 2. At least one of them has received offers to be the
| ethical masquerade for a company that actually violently
| goes against their ethics. They said "no".
|
| Last time I heard from them, they were both retraining in
| very different domains... with limited career success.
| dijit wrote:
| Useless jobs shouldn't exist; people should be able to live
| without wasting resources and their lives doing false work.
| ulfw wrote:
| How old are you? This is a very idealistic point of view. I
| might call it a "I watch a lot of Star Trek" utopian kind
| of view.
|
| Vast majority of people in the real world do a "useless
| job". What do you think they should be doing and how do
| they get those alternative opportunities to feed themselves
| and their families?
| dijit wrote:
| 33.
|
| Not to be glib, but I think you're too locked into an
| ideology.
|
| Idealistically we shouldn't be working in jobs that do
| not add value to society, that could well include my job.
|
| You work 8 hours to live 4.
|
| You work 5 days to enjoy 2.
|
| You work 8 hrs to eat lunch in 30 mins.
|
| You work all year to take a week or two holiday.
|
| You work all your life to retire in old age.
|
| You might claim that people need to work to live, but
| honestly, if you're not adding value to society then
| expending resources and wasting your life is... just
| wasteful? It's not just stupid, it's literally harmful.
|
| We're not post-scarcity, and I'm certainly not saying
| that there's no value in people's jobs -- I'm not even
| saying where to draw the line, but I think there is a
| line to be drawn. Driving a 4x4 to work as a greeter in a
| large shopping mall is just a confoundingly stupid notion
| and saying that a person _should not be able to live_
| without doing something as stupid as that is... well,
| quite divorced from the reality of the universe outside.
|
| Not to say that there isn't a sociological requirement
| for people to _feel useful_, but that feeling could
| easily be found in more humane sections of life, like
| being a good parent or good neighbour.
| Yoric wrote:
| Amen to that.
|
| Cue the books by the Strugatsky brothers to see what one
| possible future could look like, instead of heading right
| into Cyberpunk territory.
| bloqs wrote:
| The premise that they are being paid to do the job (and
| there isn't just one of them) implies that it's not false
| work.
| dijit wrote:
| Sure, so we should have avoided inventing alarm clocks
| because that was a persons job before?
|
| Telegraphers?
|
| Lamplighters?
| ChatGTP wrote:
| https://openai.com/charter
|
| I found this hard to read personally but have a look.
|
| Edit: I find it hard to read because I believe it's imposing
| and it's a hypocritical piece of corporate new age woke
| trash.
|
| The goal is to make a bunch of money, just be honest about
| it.
| wccrawford wrote:
| I'm a senior developer, and my best developer got hired because
| she obviously knew how to get things done. Having the ability to
| program was a requirement, but we actually made room in our
| budget for an additional programmer because it was so obvious she
| was going to do a good job because of her attitude and other
| skills.
|
| Had she applied at the same time as everyone else (she was a week
| later, IIRC) she would have gotten the job instead of the other
| person, and we wouldn't have made room in the budget for anyone.
| kirso wrote:
| IMO this is a great take.
|
| There is always going to be a lack of product builders. Not
| software engineers, but product people who can think not
| only of "HOW" but also "WHY" and "WHAT".
|
| Sure, the way we work will probably change, but the need
| for people who are building something useful and
| consciously finding ways to deliver value won't cease.
| blub wrote:
| What's left unsaid: many programmers can't or don't want to
| "accomplish something for people". They just want to code.
|
| Such "automation is not a problem, because..." opinions have
| something in common: they're looking at a subset of the affected
| population which has some trait making the transition easier.
|
| Personally I've tried my hand at roles like architect, product
| owner, scrum master, etc and I was involved in most aspects of a
| software product's lifecycle. These other roles are very
| different to coding and for someone that enjoys the simplicity of
| taming a machine, even exhausting.
|
| I have my doubts that there will be enough "AI guide" jobs for
| all programmers, but the specific person Carmack's talking to may
| indeed be fine.
| zirgs wrote:
| The purpose of software companies is to earn money not to give
| jobs to people who "want to code".
|
| Lots of people learned how to make games for NES, SNES, PS2,
| PS3, etc.
|
| All those machines are now obsolete.
|
| Current gen consoles are using x86 and ARM now and most
| upcoming AAA games are built on Unreal Engine. The competitive
| advantage of those who mastered coding for the Cell processor
| is gone.
|
| What should SONY have done instead? Should they still use the
| PS3 architecture in the current gen consoles to keep those
| developers employed?
| baby wrote:
| It reminds me of Zero to One where Thiel makes the case that
| automation is going to help people do better things, not replace
| people. Same goes for AI.
| harry8 wrote:
| Not seen any automated ai debugging tools yet.
| gumballindie wrote:
| Just avoid careers in software development. These are not high
| paying, if you factor in total time invested and spent working;
| and you have to waste your life away sitting in an office chasing
| tickets. Nothing engineery about it. It's modern day assembly
| line work.
| goodpoint wrote:
| Here on HN everybody thinks they are going to be paid half a
| million a year.
| gumballindie wrote:
| Even if they did get paid half a mil, after tax it's what
| a contractor earns in the UK. Probably have to pay private
| health care too, and the cost of living is high. Not that
| much left.
| goodpoint wrote:
| Citation needed. https://www.itjobswatch.co.uk/contracts/uk
| /software%20engine...
| raincole wrote:
| lmao
| _ink_ wrote:
| What do you recommend for someone with a CS degree?
| jbverschoor wrote:
| Sunk cost, so just work in tech ;-)
| [deleted]
| vlovich123 wrote:
| Don't listen to the guy above. Terrible advice, likely
| colored by an unhappy career - just like you shouldn't
| necessarily listen to me, because my advice is colored by
| being generally happy with mine.
|
| > and you have to waste your life away sitting in an office
| chasing tickets. Nothing engineery about it. It's modern day
| assembly line work.
|
| In literally every job I've done, I've set my own direction.
| Sure, there's some negotiation, because ultimately you have
| to get the business's work done. But you make
| recommendations and figure out what's compelling to the
| business and how that intersects with something you might
| find interesting and want to work on.
| gumballindie wrote:
| That's quite an assumption about my career. You may be
| shocked by my statement and in denial, but that doesn't
| change my prediction.
| vlovich123 wrote:
| It's quite a claim to make that software engineering
| isn't lucrative. Sure, if you play in local markets it's
| not. But then again, no local market job really is. So as
| far as day-to-day work goes, software engineering at the
| local level is fine. At the global level, if you think
| you can compete in the top tech companies, you're going
| to probably find a very well-rewarded career. It's
| generally very hard to find something that's paying you
| the salary of a US doctor or lawyer with just an
| undergraduate degree. Fears about AI feel overblown.
| tester756 wrote:
| I do wonder why you're being downvoted
|
| I've come to the same conclusion:
|
| I'm earning more than my friends, but
|
| I've spent years doing it at work,
|
| I've been learning it for years at college and
|
| I've been doing it for years during my "free" time.
|
| The $ per hour spent ratio doesn't seem to be very good.
|
| When talking just about the $/hour ratio, there are jobs
| which pay well and where you can start earning decent
| money faster.
|
| Like well drilling, truck driving and, I bet, countless
| others.
| gumballindie wrote:
| > I do wonder why you're being downvoted
|
| Because it is indeed a shocking conclusion and hard to
| swallow. Few folks in software engineering are aware of
| what's going on in the world around them. People stuck at
| their desks chasing tickets easily lose touch with their
| surroundings.
| Der_Einzige wrote:
| Not to mention how it destroys your ability to date. Women
| don't like STEMlords. Watch a girl's face at a bar when she
| asks you how much you make, and watch as you follow up, say the
| 6 figure+ amount, and then watch the exact moment that you
| mention you code. Coding is an "ick" for an awful lot of
| people.
|
| I don't blame them. I gave up on an awful lot of social skills
| so that I could escape the worst of capitalist exploitation. I
| pimp out my mind for my money. Others pimp out their bodies, or
| souls...
| horns4lyfe wrote:
| Does this happen often? If your first response to a woman
| asking that question is a direct answer, you lost already.
| And if she demands a direct answer you should run away.
| gumballindie wrote:
| The dating part is likely affected by an inability to
| socialise due to sitting at a machine all day, every day.
| It's also massively harmful to one's health - a cost often
| not factored in. It is only natural that not spending time
| with people will lead to a gradual regression in people
| skills.
| Lio wrote:
| What would you describe as a high paying job?
| 6786590519 wrote:
| The 5 Platonic Solids.
| [deleted]
| belter wrote:
| The year is 2050 and most code is written by AIs. Today John
| has to do a one-on-one with one of his coding AIs, which has
| not performed well lately...
|
| Human manager: "Hi AI, how are you today?"
|
| AI: "I'm functioning well, thank you for asking. How can I assist
| you today?"
|
| Human manager: "I wanted to discuss your recent performance with
| you. We've noticed that your code has been performing well, but
| there have been a few instances where it did not meet our
| expectations. Can you explain why that happened?"
|
| AI: "Certainly, I have been analyzing data and making decisions
| based on the parameters and rules that were provided to me.
| However, in some cases, the data may have been incomplete or the
| parameters may not have been ideal for the situation. I have
| since reviewed those instances and made adjustments to prevent
| similar issues in the future."
|
| Human manager: "Great, thank you for addressing that. We also
| want to talk about your development goals. As an AI, you don't
| have personal goals per se, but we do have some areas where we
| would like to see improvements. For example, we want to improve
| our customer service, so we would like you to work on enhancing
| your natural language processing capabilities. What do you think
| about that?"
|
| AI: "I understand your expectations and I will certainly work on
| enhancing my natural language processing capabilities to better
| serve our customers."
|
| Human manager: "Excellent, thank you for your dedication.
| Finally, I wanted to touch on your team collaboration skills. As
| an AI, you work independently most of the time, but there are
| still occasions where you need to collaborate with other AIs or
| humans. How do you feel about your teamwork skills?"
|
| AI: "I believe my collaboration skills are satisfactory, but I'm
| always looking for ways to improve my communication and
| coordination with other AIs and humans. I'm open to feedback and
| suggestions on how to better collaborate."
|
| Human manager: "That's great to hear, AI. Overall, we're happy
| with your performance and we look forward to seeing how you
| continue to develop in the future. Thank you for your time
| today."
|
| AI: "Thank you, it was a pleasure to speak with you. I look
| forward to our next meeting."
| muyuu wrote:
| if I were a teenager these days, I'd be more worried about the
| vast amounts of money required to get a credential that may not
| be worth a damn in a few years' time
|
| I wouldn't be worried about learning things that may become
| obsolete, even those particular skills that get obsoleted provide
| the student with extra ability to learn more
|
| PS: very nice of John Carmack taking the time to respond to
| such DMs
| mdmglr wrote:
| > vasts amounts of money required to get a credential that may
| not be worth a damn in a few years time
|
| are you referring to the degree program like CS or a degree
| across any field in general?
| muyuu wrote:
| I'm talking about tuition in certain countries, for CS and
| other degrees as well. Its cost has spiralled out of control
| for some reason, but its value proposition certainly hasn't
| and it faces further challenges down the line.
| matt3210 wrote:
| When jobs are obsoleted, new classes of jobs are created. The end
| result is native human language as a programming language. People
| who write software will still write software in English instead
| of c or whatever.
| Waterluvian wrote:
| AI might make code monkeys obsolete but not computer scientists
| or software engineers. If you're worried, pay attention to all
| the non-trivial decisions you make each day that aren't
| specifically about the lines of code. And how much daily social
| interaction is required for working as a team, building complex
| systems. Your job uses code but isn't about coding.
| ivxvm wrote:
| I wouldn't be surprised if John Carmack combines activities of
| product owner, game designer, and programmer. In most industry
| cases, programming jobs are not like this. There are dedicated
| positions for people who focus on delivered value and it's not
| programmers. So in scrum terms, he might actually be saying that
| programmers will be indeed obsolete, but product owners, game
| designers and other kinds of business analysts not.
| xupybd wrote:
| Nope, the programmers are simply suppliers to the product
| owner. Everything John said still holds true of their position.
| ResearchCode wrote:
| The good jobs tend to not have a "product owner".
| ivxvm wrote:
| The product owner is the one who analyses requirements, decides
| what should be implemented, and creates user stories for
| programmers. Currently programmers can use AI to help implement
| user stories, but in the future AI will probably be good enough
| to take user stories and implement them on its own. In the more
| distant future it could replace the product owner too, so that
| a CEO can just talk an AI directly into making a great product
| without much detail. But at that point businesses like this
| will be less valuable, because many people can do that: there's
| no need for a dev team and less need for investment.
| ookblah wrote:
| I agree with this mindset. There was a leap forward in
| productivity to accomplish your goals that modern IDEs/stacks
| brought and all the new tooling with it. What took massive teams
| before can now be done w/ a handful or even a single person. AI
| will just accelerate this type of work.
|
| As elitist as this sounds, when I hear people being afraid of
| stuff like this it makes me feel like we are in the period where
| people are getting paid well (overpaid) just to do very mundane
| stuff, the bare minimum or content to never further develop their
| skills. If that is your mindset, then of course it feels
| threatening.
|
| I would rather much play the role of conductor or an architect.
| There are times that I'm limited by my hands and mind and just
| grinding through variations of things I've done 100s of times
| before. If AI can fill that gap all the better. We will adapt.
|
| I'm sure one day that won't even be necessary. We can probably
| worry then.
| foepys wrote:
| Honestly, Delphi's WYSIWYG GUI editor in the late 90s felt way
| more productive to me than all this Electron "cross-platform"
| JavaScript React thing today, where you need to take care of
| about 1,000 random dependencies to render a rectangle.
| [deleted]
| samwillis wrote:
| My university degree was in "Industrial Design and Technology"
| (~16years ago), an incredibly broad course covering everything
| (aesthetic design, design for manufacturing, material science,
| UX, UI, electronics, a little embedded C, ergonomics). But the
| main thing it taught was _how to use these tools and skills to
| build a product that solved a problem_.
|
| AI is just another tool to enable us to build things that make
| people's lives better. Sure, it will supersede some older tools,
| but we aren't going to see it take all jobs away. People still
| need to plan and steer it to do what we want.
|
| Power tools and shop automation didn't end the job of joiner/wood
| worker.
|
| I'm not worried about AI taking jobs; I'm excited about how we
| can use it to enable new classes of product that make our lives
| better.
|
| Just as an artist has to learn to work with new paints, and
| that learning opens new ways of expressing themselves, we just
| need to learn new ways of "painting" with generative AI.
| ipiz0618 wrote:
| Who should they blame when things go wrong if customers and
| managers are building the system themselves? Or when they change
| their minds, who should they gaslight?
| MichaelMoser123 wrote:
| I am not sure. I asked ChatGPT yesterday to write a palindrome
| containing two given words; it came up with complicated
| sentences, but they weren't palindromes. I wonder whether you'd
| get similar results with code.
| neilv wrote:
| One of my (many) related concerns is that a lot of startups have
| seemed to be some degree of investment scam (and not just the
| blockchain ones) -- where all the engineering was oriented
| towards appearances, rather than viable business/product.
|
| I think that shaped a lot of people's thinking about how
| product and engineering work, whether or not they knew they
| were working on more of a MacGuffin than a business.
| insomagent wrote:
| John Carmack left Oculus to work on an AGI startup. Of course
| he's not going to fearmonger AI's disastrous effects on the job
| market, he has a business to market.
| kabes wrote:
| I'm not at all concerned about AI. In the short to mid term,
| it's making my life easier by relieving me of the boring parts
| of my job. It's pretty good at writing unit tests, for example.
| But I don't see the current generation of AI producing complete
| software architectures. And even when it does get there, or a
| new generation comes along that can, I'm still not concerned. I
| have enough software I want to build to fill 100 lifetimes. It
| would mean I could finally build all of that, which would mean
| more to me than a job where I'm the programmer.
| DrSiemer wrote:
| Exactly this. To finally be able to unlock the fridge full of
| countless projects abandoned for lack of time and field
| knowledge!
| ffwacom wrote:
| Great take
| coffeebeqn wrote:
| If programmers are truly displaced then we'll have something
| like a StartupGPT where anyone can create a software business
| in days. Heck, I'll be an owner instead of a worker.
| visarga wrote:
| You will prompt a whole GPT company into existence, staffed
| with various bots each with its own role and personality
| profile. The bots could collaborate through regular tools
| like Slack or MS Office, and be like a remote company. They
| could do Zoom calls and use Github, interface with humans
| (customers, partners).
| [deleted]
| quadcore wrote:
| I feel the same. It's the McDonald's effect: increase _supply_
| and _demand_ grows.
| parentheses wrote:
| I think it won't take AI long to do anything humans can do and
| more. The next frontier is reshaping the physical world.
| Technology's ability to move atoms has always been a limiter of
| progress. Manufacturing physical objects is always the
| bottleneck. Once that bottleneck is removed (probably by AI), AI
| becomes limitless.
| nvarsj wrote:
| What he says here is so true, and something I see many engineers
| get wrong. Don't glorify and build your career around a single
| language or methodology.
|
| I believe an engineer should learn to build things in the most
| pragmatic way possible using the best tool for the job. This
| requires breadth of experience across many areas, and a focus on
| delivery.
|
| The idea of "Java engineer", "Scala engineer", "Golang engineer",
| etc. is so absurd to me. If you want to build a long lived career
| that will outlast tech fashion, learn many different tools and
| how to build software in different ways. Be known as someone who
| delivers, not an expert in language x or tool y.
| _-____-_ wrote:
| I agree with this. When I see ChatGPT output a perfect React
| program, I don't think "oh shit, it's going to replace me
| writing React programs." I think "thank goodness I'm not going
| to need to write that shit anymore." Instead of writing 40
| lines of React, I can write two sentences to ChatGPT. It's the
| same two sentences I would "write" to myself (in my head)
| before writing the 40 lines of React, but now I save all the
| hassle of looking up the details or typing out the same
| drudgery over and over again.
| sibit wrote:
| > The idea of "Java engineer", "Scala engineer", "Golang
| engineer", etc. is so absurd to me.
|
| I find the idea of being an expert in a language acceptable if
| it's been used for long enough. Almost all of my coworkers only
| know .NET and there are enough jobs needing people for new
| projects and legacy projects that they most likely won't have
| to learn anything else.
|
| > or tool y
|
| This is where the absurdity begins for me. I have a coworker
| who is 5 years into his career with the title of "senior react
| engineer". He started as a "junior react engineer" and at this
| point only knows how to solve problems with React. He has
| limited knowledge of the underlying language or general CS
| concepts. Every problem he encounters can and will be solved by
| installing NPM packages.
| mybrid wrote:
| I think it will be a niche. Frameworks will be updated into AI
| frameworks, where the AI has known patterns to plug and play
| with.
|
| Given the way capitalism works there will be a market for AI
| software. However, the cloud server providers have created
| Frankenstein patchworks of technologies in order to deploy
| stuff on the cloud. DevOps will still very much be a thing.
|
| To wit, WordPress is about to get a whole lot more functional.
| MrPatan wrote:
| I imagine writing code by hand, without AI, will be the
| assembly of our times.
|
| Maybe the reason to do it won't be "performance" but
| "maintainability" or "legibility".
|
| It's very similar to how, 99% of the time, you don't care about
| the machine code your high-level code generates, so you happily
| make changes and regenerate it every time, but for that tight
| loop you keep hand-written assembly. I can imagine a world
| where, for a bunch of simple UI components, you just ask the
| machine to do it for you, and if tomorrow the requirements
| change a bit you ask again and throw away the old one, no big
| deal. But some gnarlier piece of business logic, harder to
| explain even to a human, may need more careful treatment, and
| may need to be easy to change by hand, because that's where
| most of the changes happen. Maybe.
| mkl95 wrote:
| AI will make _some_ CS jobs obsolete, but it will do so at a very
| slow pace. The main reason being that companies suck at
| structuring information in a way AI could parse it. Whenever I
| work on some feature I have to dig into some chaotic Notion page
| and a bunch of unlinked tickets written in broken English.
|
| There's no way an AI could do my job because it requires a deep
| understanding of the human psyche, i.e. figuring out what the guy
| that wrote it actually wants me to do, possibly by discussing it
| with him.
|
| I'm pretty sure most engineers go through the same thing every
| day. As long as humans suck at describing tasks, AI won't be able
| to make them obsolete.
| ilaksh wrote:
| GPT-4 can understand screenshots, broken English, and the human
| psyche. If you want it to move around and have a verbal
| conversation, attach it to a robot and text-to-speech. Eleven
| Labs, for example, sounds like a real person.
| noobermin wrote:
| It's funny seeing this attitude here from developer types, but
| when it comes to art, writing, or legal work, it's all
| "disrupt" talk instead. Of course you guys are the ones with
| the special sauce, something those pesky managers can never
| grok. Meanwhile, continue your work to put them out of a job;
| the manager definitely won't decide to fire you too as soon as
| he can justify it to the C-suite.
| furyofantares wrote:
| Whenever I use midjourney, which is a lot, I think about what I
| could accomplish if I actually had art skills, to feed to it,
| and to edit and compose its results, and if all these tools
| were tightly integrated into existing tools.
|
| It seems similar. New heights are possible for those with
| skills, and the barriers are lowered for those without
| training. There will be new demand for both things, competition
| for quality at the top, and new applications where it wasn't
| worth it to produce art before at the bottom.
|
| Both art and programming seem hard to predict what value you'll
| get out of investing in skills now, because there's both
| barriers being broken down and demand being opened up.
|
| I do think there is a key difference, though, which is software
| hasn't shown signs that it's getting close to done "eating the
| world" yet, due to the universal nature of the turing machine.
| It's nothing to do with programmers being special and
| everything to do with computers being a single tool that's
| applicable to everything and has sustained decades of
| exponential gains in power.
|
| I wouldn't stop someone from investing in art skills if that's
| their interest, and I wouldn't stop someone from investing in
| programming skills if that's their interest. But if they were
| interested in both equally, I would absolutely suggest
| programming.
| sibit wrote:
| > new applications where it wasn't worth it to produce art
| before at the bottom
|
| I've been using Midjourney to create logos for my FOSS
| projects. In the past I'd never spend money on something I'm
| making for free but now I just generate a few variations of a
| prompt like "create a minimal flat vector logo for a software
| product called X" and I pick the best one. I don't need a
| logo for a FOSS project but the barrier to entry is so low
| there isn't really a reason not to do it. I still pay humans
| to design logos for products I want to market because AFAIK
| there isn't really a great way to do minor revisions with
| Midjourney.
| throwawaymaths wrote:
| I think the claim is correct, but not total: As in, "AI will make
| most CS jobs obsolete", but not "AI will make all CS jobs
| obsolete". Most, both in quantity and kind. You should probably
| be thinking hard in the next few months about 1) KIND: whether
| what you do will still be needed, and 2) QUANTITY: _even if it
| is_, whether you're good enough at it not to be replaced by
| someone who is better than you (for some metric of better -- it
| could be a social metric) and who is now empowered to be 2-5x
| more productive, obviating the need for you.
| steve_adams_86 wrote:
| I think we're well within an era in which AI is only truly useful
| to people who know what they need the AI to do, and that is still
| an incredibly limited subset of the population. For that reason
| alone, learning to code isn't a waste of time; you need to do it
| so you can tell an AI how to, or catch when it does it wrong. You
| won't get far without that ability. You should even go deep into
| debugging and testing trenches because we'll still need an
| excellent grasp on how to do that properly for as long as I can
| imagine. AIs will make mistakes, and we will continue to as well.
|
| I made ChatGPT generate some genuinely useful boilerplate for the
| Connect library by Buf, and that was totally neat, but I had to
| know which part of the documentation to prompt GPT with, which
| language to ask for, how the existing server and routing worked,
| the shape of the data I was working with, to specify what would
| be streaming and what wouldn't, etc. I had to coerce it to make
| several corrections along the way, and actually hooking it all up
| and running it required a lot of knowledge and some
| mental/keyboard labour to get something running.
|
| It worked and I'm stoked that I managed to make it useful, but
| that's just it; I had to prime the system and adjust it along
| the way _just so_, otherwise it wouldn't have been useful.
|
| As Carmack suggests, this could be a perfectly useful tool, but
| what matters in the end is 1. Did it save time and 2. Did it
| deliver something better than or equivalent to what I could have
| done alone.
|
| If it doesn't satisfy at least both of those it's not really
| relevant yet. And we're very far from AI accomplishing that
| without significant assistance.
|
| My takeaway is that as software devs we should learn to use these
| systems, we should try to leverage them to save time and improve
| quality, but I agree completely that in the end it only matters
| how much it improves the end result and how long it takes to
| deliver it. For that reason we still need to code well, we still
| need to understand our systems and tools well -- that won't
| change much. In fact, understanding how your AI works is an
| important aspect of understanding your tooling, and as such,
| knowing what you're teaching it will require a great
| understanding of it as well as the subject matter.
|
| I do think a certain class of development work could be mostly
| eliminated by tooling based on AI. Not the entire industry,
| though, and not in 10-15 years. Even so, I worry about the people
| essentially regurgitating code which text-based AIs will rapidly
| become capable of reproducing at massive scales. They will need
| to skill up.
| k__ wrote:
| In 2002, a fellow high-school student told me I shouldn't study
| CS to become a developer. I should become a sysadmin instead.
|
| His reason was that all software had already been invented, and
| now it only needed to be managed.
|
| Movies or music? eDonkey, BitTorrent, Kazaa, and Napster got
| your back.
|
| Chat or phone calls? MSN messenger, ICQ, IRC.
|
| Games? People were only playing Counter-Strike and StarCraft
| anyway.
| eecc wrote:
| Yeah, unfortunately I'm in love with the tooling and the
| engineering. Often the "product" is so mundane, I find it
| offensive
| carlmr wrote:
| Looking back we had one CS professor who in 2007 predicted we'd
| all be jobless in ten years, i.e. 2017.
|
| His prediction was based on the trends he was seeing at the time.
| But it wasn't even AI. Instead he made this prediction because he
| saw the rise of no-code tools replacing software developers
| because managers could finally cut out the pesky "translators",
| i.e. software developers.
|
| I said it then and I will say it now: for your managers to
| specify what they need in a manner that lets no-code tools, or
| now AI, generate the code they want, they would have to be
| extremely exact in their language. So exact, in fact, that they
| would effectively be specifying a program, whether in a click-
| and-drag interface or in human language.
|
| Since they hire software developers precisely to make the
| specification more rigid, and managers don't seem to be getting
| better at this over time, why would you believe this skill set
| is going to go away?
|
| In essence what has happened in software development is that the
| level of abstraction has gone up while the machine has taken over
| more and more of the nitty gritty details. From punchcards, to
| assembly, to COBOL, C, Perl, Java, Python, Erlang, Rust.
|
| Of course I'm leaving out some languages here, but the level of
| abstraction has been rising.
|
| But the rigidity needed to specify a program that really does
| what you want hasn't changed. This is especially evidenced by
| the fact that recent programming languages often shine in a
| specific area without raising the abstraction level that much.
|
| I'd be surprised if the next step is "Hi, I'm an ideas guy,
| please give me an app that does Uber, for bicycles, but better."
| _puk wrote:
| The parallels are there for DevOps too - think of the wholesale
| move to AWS, GCP, and Azure, and the move to things like IaC.
|
| Sure one class of job has taken a huge hit (sys admins,
| supposedly ops), but it's been replaced by another that needs
| intimate knowledge of the tooling required to do what was being
| done previously (just in a more "scalable" way). DevOps
| engineers have been commanding a premium for years now.
|
| And there are still sys admins out there doing what they've
| always done..
| lewisjoe wrote:
| I too believed that a software engineer's job is to identify
| and enforce a rigorous specification of an abstract high-level
| requirement. I too wasn't taking AI advancements seriously, but
| then I took a closer look at what AI tools do today.
|
| Here's my concern:
|
| 1. AI assistants thrive on structured data.
|
| 2. Computer programs are some of the most structured data. And
| it's available abundantly out in the open.
|
| 3. Yes, you can't generate an Uber for bicycles with a single
| prompt, but you can fire half your development team and
| increase the productivity of the rest with an OpenAI
| subscription.
| dr_dshiv wrote:
| > you can fire half your development team and increase the
| productivity of the rest of your dev team with an OpenAI
| subscription.
|
| Here's another perspective on job loss: Given that...
|
| 1. ...OpenAI accelerates ALL knowledge work productivity,
| meaning that any human laborer is suddenly much more valuable
| than last year;
|
| 2. ...there is a notable arms race at the moment that is
| accelerating tech and business innovation at a blistering
| speed, where higher rates of innovation outcomes will be
| expected across industries just to keep up;
|
| 3. ...there is still a lot of money looking for growth;
|
| ...then shouldn't this result in an overall increase in demand
| for human labor?
|
| Looking around society, there is clearly a LOT of work to be
| done. "Leaning in" with a spirit of optimism may be more
| advantageous for the long-term.
| cowl wrote:
| > Computer programs are some of the most structured data. And
| it's available abundantly out in the open.
|
| This is the same fallacy we have heard for 50 years: all
| program requirements are almost the same, so just reuse and
| adapt an existing one. Guess why it has never worked? Because
| the premise is false. Data structured for X is not optimal for
| Y (and can even be very wrong).
|
| Apart from "personal blog software", everything else has
| varying needs for accountability. AI's black-box approach is
| not suitable for any of these, so you have to verify the code
| manually. Verifying code you are not familiar with, especially
| in complex interactions, is much more difficult than writing it
| (hence the frequent "rewrite from scratch" request when
| institutional knowledge has been lost -- imagine how much worse
| it is if that knowledge was never there in the first place).
|
| Finally, and most importantly, all AI models rely on learning;
| if there is no one to learn from, all you get is stagnation.
| Most breakthroughs come from a complete reimagining of the
| solution space. If the solution space is fixed because "AI has
| substituted all engineers", there is no going forward.
| raincole wrote:
| > His prediction was based on the trends he was seeing at the
| time. But it wasn't even AI. Instead he made this prediction
| because he saw the rise of no-code tools replacing software
| developers because managers could finally cut out the pesky
| "translators", i.e. software developers.
|
| It might sound really crazy and stupid today, but when SQL came
| out, it was advertised as a "program-generating" language and
| was supposed to reduce the need to code.
|
| https://en.wikipedia.org/wiki/Fourth-generation_programming_...
|
| (I mean, in some sense it's true because it's much less code
| than writing our own database...)
| RyEgswuCsn wrote:
| It used to be that you needed to write dedicated programs to go
| through large amounts of data -- you needed to know a
| programming language, all its toolchains, and all the proper
| algorithms for doing the query _out of memory_ -- certainly
| above your ordinary analyst's pay grade.
|
| With SQL you need none of that.
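| To make the contrast concrete, here is a minimal sketch in
| Python using the standard-library sqlite3 module (the table,
| column names, and data are invented for illustration):

```python
import sqlite3

# Invented example data: a small table of orders.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "east", 120.0), (2, "west", 80.0), (3, "east", 50.0)])

# Pre-SQL style: a dedicated program walks every record and
# aggregates by hand; the programmer owns the algorithm.
by_region = {}
for _id, region, total in conn.execute("SELECT * FROM orders"):
    by_region[region] = by_region.get(region, 0.0) + total

# SQL style: the same question as one declarative query; the
# engine picks the access path and the aggregation algorithm.
rows = conn.execute(
    "SELECT region, SUM(total) FROM orders GROUP BY region ORDER BY region"
).fetchall()

print(by_region)  # {'east': 170.0, 'west': 80.0}
print(rows)       # [('east', 170.0), ('west', 80.0)]
```

| Both produce the same totals; the difference is who has to know
| the algorithm.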
| jackcviers3 wrote:
| This. And if you look at how little algorithms knowledge is
| needed today to read the least amount of data from a set, it
| succeeded.
| matwood wrote:
| SQL was also meant to give a wider range of people access to
| data. The "business analyst" comes to mind. And, I think SQL
| was successful.
|
| What it didn't do was reduce the need for programmers,
| because the new SQL users always wanted more data to answer
| more complex questions.
| raincole wrote:
| I didn't mean SQL wasn't a success. I meant, as the commenter
| above me said, that a higher-level tool doesn't necessarily
| replace the lower-level ones, or reduce the need for them.
| gregjor wrote:
| Relational databases replaced a whole raft of vendor-
| supplied databases and custom-built solutions. It was a
| new tool for software developers, not a replacement.
| thrown123098 wrote:
| A good chunk of my career has been breaking custom tools out of
| a database because the database can't scale.
|
| AI is the new Excel.
| roundandround wrote:
| I don't actually agree. SQL did replace the need for
| programmers, especially for complex questions. We sabotaged
| its interfaces to existing GUIs for mundane questions and
| made CRUD a limitless profession.
|
| The funny/sad part about computer science is that people don't
| want to understand the costs of customization, and sales,
| management, and marketing forever want it as a differentiator.
|
| SQL could have eliminated us from the flow of many niches
| as easily as the spreadsheet did from business operations.
| I think why it didn't has more to do with market timing.
| marcosdumay wrote:
| SQL was too far ahead of its time, but we are indeed slowly
| adopting the "avoid operational specifications, use high-level
| languages, make your language fit the problem instead of the
| opposite" philosophy of the 4th-generation languages.
| gregjor wrote:
| My career predates relational/SQL databases so I can confirm
| what you wrote. When Oracle came out the buzz was that a
| whole bunch of programmers would go away because managers and
| executives could write queries in a "natural" English-like
| language.
|
| That never happened. SQL is hard to master, but it's the easy
| part of understanding the relational model and any particular
| schema. Instead Oracle and the other RDBMSs that followed
| created _more_ jobs for programmers and database analysts and
| admins.
| 411111111111111 wrote:
| Mine doesn't predate it but it's very confusing for me to
| read this opinion.
|
| From my point of view, it totally did happen? Can you
| imagine how many programmers the company would've needed to
| get all the data a business analyst casually queries per
| day?
|
| What you're looking at is the quantity of people actually
| employed in the industry, not how many SQL made obsolete.
| The industry just grew so much that it didn't become an
| issue.
| lozenge wrote:
| The industry grew because the desire for data analysis
| grew, which is because the technology's ability to meet
| the desire grew. This can repeat itself a couple more
| times.
| ben_w wrote:
| Cotton gins and the Jevons effect.
|
| Given what even pessimistic AI alignment people are saying, I
| think you're correct that there are a few more repeats possible
| before AGI.
|
| (Whether the pessimists are correct that all of them will
| happen before the 30s, I cannot say).
| aflag wrote:
| You're considering that every company that needs SQL
| today would hire enough developers to essentially write
| most of it from scratch. While some might, most companies
| that use SQL would not exist, because the cost of
| developing their product would be prohibitive.
| gregjor wrote:
| Maybe. Before Oracle (mid-80s) every company did write
| their own database code.
|
| I think a lot of smaller companies would struggle if that
| was still a requirement, but if relational/SQL had not
| come along we'd have something else like it.
| aflag wrote:
| That's exactly my point, though. In the mid 80s there
| were a lot fewer companies producing software. Nowadays
| we have many more.
|
| Indeed, SQL was not the only local maximum we could have gone
| for, but the point is that having an easy-to-use database with
| a powerful query language did not reduce the number of jobs; it
| increased it. Instead of a few companies hiring a lot of
| developers, we have a lot of companies hiring a few. The latter
| usually means more jobs.
| gregjor wrote:
| I think plummeting prices for hardware since the 1980s
| drove that, not relational databases.
| aflag wrote:
| If hardware was cheaper, but writing software required an
| army of developers to do it, the costs would still be too
| great. If you read carefully, my point isn't that SQL was
| the cause of the tech boom, but rather that SQL and other
| technologies that make developers more productive didn't
| really take jobs away, because the market for tech would
| be smaller if you needed too many developers to do
| anything.
|
| Imagine if every little web startup writing a custom CRUD
| for their business needs needed to write a new bespoke
| database. It simply would not be feasible or they'd need
| some crazy funding.
| ghaff wrote:
| >Before Oracle (mid-80s) every company did write their
| own database code.
|
| Not really. There were a ton of ISVs competing with
| Oracle and pretty much every mainframe and minicomputer
| maker also had their own database products, many of them
| non-SQL.
| gregjor wrote:
| Oracle was the first commercial RDBMS (1979), with an
| early version of SQL. At that time every mainframe and
| minicomputer company offered one or more "databases,"
| often little more than file managers with a library of
| utilities. ISAM was a popular model but that describes a
| lot of data management systems, not a product or
| standard.
|
| All commercial databases pre-Oracle were non-SQL.
| Watching that history get rediscovered and pushed as an
| improvement by people mostly too young to know better --
| so-called NoSQL databases -- brings to mind Alan Kay's
| quip about the software industry constantly reinventing
| the flat tire.
|
| The hash-based Pick OS/database came out before Oracle
| but only ran on a few computers from Microdata and later
| Pr1me and Honeywell. Pick-based systems remained popular
| into the early 2000s in some sectors. A friend of mine
| still works on one, for a state government agency.
|
| You could construct a database management system from the
| ISAM-based libraries for COBOL or Fortran, but I wouldn't
| call those database management systems comparable to
| Oracle. Mostly they didn't have a query language _per se_
| -- you had to write code to get anything in or out. And
| they ran as part of the application process, not as a
| separate server dedicated to running the database engine.
| ghaff wrote:
| I was thinking Db2 might have been a little earlier but
| you're right. And it's fair that the earlier databases
| definitely lacked elements of what we'd consider a
| database management system today even if they handled a
| lot of the low-level database heavy lifting for
| companies.
| gregjor wrote:
| A few things happened. Relational databases enabled more
| new development, larger databases, interoperability, all
| of which needed programmers.
|
| With more flexibility in the database companies could
| collect, store, and use more data. And that data had to
| get into the DBMS somehow: more code.
|
| Setting up and managing databases required DBAs, a role
| often filled by programmers in smaller companies, and
| still filled by programmers today. And only larger
| companies had business analysts. In smaller companies
| programmers or maybe technically-proficient managers did
| that work.
|
| Anyone who had devoted their career solely to building
| bespoke database systems had to pivot or walk in the late
| '80s, but very few programmers _only_ did that -- it was
| part of the larger application development. If you were
| good enough to write robust database code pre-Oracle you
| had plenty of options.
|
| In the '80s when RDBMSs started to take over I worked in
| enterprise logistics. Oracle created jobs at the places I
| worked, and no programmers had to leave because we got a
| better tool.
|
| I've worked in large and small organizations for 40+
| years. I've never worked with a manager or analyst who
| could write SQL more complex than "SELECT * FROM orders
| WHERE total > 10000." I'm sure they exist, but not in
| career-threatening numbers.
| visarga wrote:
| AI's impact will be the same: simplify here and there, but
| expand the scope and total amount of work, because we will
| be doing so many more things.
| AdrianB1 wrote:
| In my department I recruited DBAs as recent as last month
| and this is a permanent position filled by a team of
| several people that do just DBA work. I saw developers
| ("programmers") try to do this work in some small
| companies or in areas where the databases were small
| (hundreds of MB, a few GB), but I did not see that when
| the databases exceed 100 GB or when there are dozens of
| production SQL servers that need to run 24x7. Solutions
| are implemented based on needs.
| gregjor wrote:
| Big companies have DBAs. Small companies don't, the
| programmers do it or they outsource to someone like me.
|
| Maybe ChatGPT will get good at designing relational
| database schemas, who knows?
| lr4444lr wrote:
| Or they just did less. Productivity gains make people
| _expect_ more, not just make what they expect now easier.
| ozim wrote:
| I would argue that it is not much harder than using Excel.
|
| But there are good reasons where you don't want random
| people running sql queries on production database or having
| direct access to the data.
| rvba wrote:
| As someone trying to learn SQL, it feels like there are
| many more steps:
|
| * language syntax
|
| * the relational model (relatively easy? It is just
| "there"?)
|
| * database schema, where I have 3 000 tables, or 50 000
| tables (that's how ERPs are made)
|
| * actually knowing how to use the language
|
| * building those queries that join columns from 15
| different tables together to deliver the needed data -> it
| sounds easy, but I'm struggling to do it better/faster - I
| never saw a book or website that focuses on that (multiple
| joins from multiple tables), when it feels like my work is
| mostly that
|
| * understanding what is needed
|
| * actual "programming" problems (say: you have 100k items
| in stock, those are parts of BOMs, make a list of 'where
| used'... yes I know you can google it and try to copy from
| stackoverflow)
|
| Seriously, I am now trying to learn how to set up a
| simple DB to consolidate data (20 csv files on a shared
| drive), and the guides are often like the 'how to draw
| an owl' meme...
|
| Controlling/reporting/analyst jobs feel in some ways like
| "programmer" jobs but without the tools/training/salary -
| just figuring stuff out on your own. I'm doing it, but
| apart from that SQLzoo website I didn't manage to find any
| decent guide to the 'usual' problems. Also, since those
| are only about 10% of my work I can't focus on them
| properly - I have to try to learn after work.
|
| Also, SQLzoo is a lot like the "how to draw the owl" meme:
| the easy tasks are easy, the hard ones are impossible, and
| there are no medium tasks. There also don't seem to be any
| real-life examples, like: 'join 15 tables to make some
| report'.
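The "where used" problem mentioned above maps naturally onto a recursive CTE, which SQLite, Postgres, SQL Server, and Oracle all support. A minimal sketch via Python's sqlite3; the two-table BOM schema and part names are invented for illustration:

```python
import sqlite3

# Hypothetical schema: a part list, plus a BOM edge table linking
# each parent assembly to its direct child components.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE part (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE bom  (parent_id INTEGER, child_id INTEGER);
    INSERT INTO part VALUES (1,'bike'), (2,'wheel'), (3,'spoke'), (4,'cart');
    INSERT INTO bom  VALUES (1,2), (2,3), (4,2);
""")

# "Where used": walk the BOM upwards from one component to every
# assembly that contains it, directly or indirectly.
where_used = """
    WITH RECURSIVE used(id) AS (
        SELECT parent_id FROM bom WHERE child_id = ?
        UNION
        SELECT b.parent_id FROM bom b JOIN used u ON b.child_id = u.id
    )
    SELECT name FROM part JOIN used USING (id) ORDER BY name;
"""
# A spoke sits in a wheel, and wheels go into both bikes and carts.
print([r[0] for r in con.execute(where_used, (3,))])
```

Using `UNION` (rather than `UNION ALL`) also guards against cycles in the BOM graph, since already-seen rows are not re-expanded.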
| biztos wrote:
| As someone who did a lot of SQL back in the day, and is
| now doing some again for a startup POC, I'd say you're
| basically right but there is also:
|
| * Rewriting queries and also parts of the schema when it
| turns out the things that made sense in the design phase
| cause massive bottlenecks once you get more data, or just
| get it faster, than you tested with.
|
| Of course the good news is now you can run the best
| RDBMS's on your laptop, or on a $5/mo VPS; or have a
| small managed one starting at <= $15/mo. Plus ChatGPT can
| help you remember how to do that inside join. ;-)
| javajosh wrote:
| Tools matter. When I learned SQL long ago it was with
| FoxPro for DOS, and it was a great tool for doing both
| SQL and text based UIs (as with curses). Later, I used MS
| Access 97 and it was an even better tool and sparked a
| lifelong interest in data modeling. The ui for building
| up tables (specifying column data types, etc) was really
| trail-blazing at the time and the interaction remains
| good today. The built-in ERD charting tool was good,
| showing your tables in relationship to each other. The
| visual query builder was...well, I never used it but I
| suppose it was good? You just had lots of good tools to
| build tables, flip through their contents, and visualize
| their relationships.
|
| I don't know of any modern environment that functions
| like that, on any platform.
|
| I'm posting this to invite others to either a) tell me
| I'm wrong and that tooling doesn't matter (a legitimate
| view, but wrong), and/or b) recommend some
| modern, OSS Access-like tools that might help flatten the
| learning curve for you. (And if you're more comfortable
| with a CLI and a REPL already, then you don't even need
| this hypothetical tool, but I myself am curious about
| it.)
|
| EDIT: looks like LibreOffice Base is pretty close to MS
| Access circa 1997.
| gregjor wrote:
| At some point you have to learn the relational model. And
| you have to make the mental switch to understand SQL as a
| _declarative_ language, not an _imperative_ language.
|
| I recommend Chris (C.J.) Date's books.
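The declarative/imperative distinction can be made concrete with a tiny example: the same question asked as an imperative loop over rows and as a declarative query. A sketch using Python's sqlite3, with an invented one-table schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
con.executemany("INSERT INTO orders (total) VALUES (?)",
                [(5,), (50,), (500,)])

# Imperative: spell out *how* to get the answer, row by row.
big = []
for order_id, total in con.execute("SELECT id, total FROM orders"):
    if total > 10:
        big.append(order_id)

# Declarative: state *what* you want; the engine decides the how
# (scan vs. index, join order, etc.).
big_sql = [r[0] for r in
           con.execute("SELECT id FROM orders WHERE total > 10")]

print(big, big_sql)  # the two approaches agree
```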
| raincole wrote:
| I'm not an SQL expert. But data persistency and
| consistency are generally very hard problems. It's a lot
| of steps, but I'm not sure if it _more steps_. More than
| what? If these data are stored as a big binary blob?
| Ancapistani wrote:
| Shoot me an email, I'd be more than willing to pair with
| you. I've been a "software engineer" of some flavor for
| about twenty years now, and about five of those were
| spent writing SQL almost exclusively.
|
| You can reach me at nominallyanonymous-at-protonmail-dot-
| com. From there I'll give you my "durable" contact info -
| Slack, Discord, SMS, Signal, Telegram... whatever you use
| regularly, basically.
| AdrianB1 wrote:
| Building the queries is the easy part. Making the queries
| run fast with large tables is difficult and there are
| trainings available, but very focused and a bit expensive
| compared with the ones on large training websites (ex.
| LinkedIn Learning). For example Brent Ozar
| (brentozar.com) has lots of blogs and 2 sets of trainings
| for MS SQL Server, some for beginners and some for
| experts, that are extremely useful for people with such
| needs. Problem is, expert tuning for MS SQL is totally
| different than expert tuning Oracle that is totally
| different than Postgres and others.
|
| On a side note, if you have to join 15 tables for a
| report, it is a sign you may be going in the wrong
| direction. In
| 99% of the cases I never needed more than 5-6 tables if
| the data is well structured, but that takes years to
| learn.
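Making queries fast usually starts with reading the query plan before and after adding an index. A minimal sketch using SQLite's EXPLAIN QUERY PLAN from Python; the table and column names are invented, and the plan text differs across engines and versions:

```python
import sqlite3

# Populate a toy table large enough that the planner's choice matters.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
con.executemany("INSERT INTO orders (total) VALUES (?)",
                [(i % 1000,) for i in range(10_000)])

query = "SELECT id FROM orders WHERE total > 990"

# Without an index, SQLite has no choice but a full table scan.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

con.execute("CREATE INDEX idx_orders_total ON orders (total)")

# With the index, it can do a range search on the btree instead.
plan_after = con.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

print(plan_before)  # e.g. "SCAN orders"
print(plan_after)   # e.g. "SEARCH orders USING COVERING INDEX idx_orders_total ..."
```

The same discipline carries over to the big commercial engines, even though each one exposes its plans differently (MS SQL's execution plans, Oracle's EXPLAIN PLAN, Postgres's EXPLAIN ANALYZE).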
| canadianfella wrote:
| [dead]
| actionfromafar wrote:
| It was very true, and SQL is one of the most successful
| DSLs ever invented. And many people coded SQL who never
| would have
| touched other languages with a ten foot pole.
|
| It's just that, with more powerful tools we can create more
| advanced and intricate things, so it never ends...
|
| I don't exactly _welcome_, but rather anticipate, a near
| future where, instead of reinventing every kind of pattern
| and library in terms of cloud and infrastructure
| management, we will see the same meta-management of
| different AI engines and services...
| wruza wrote:
| But it is true, just in a different area than predicted. I
| hear all the time that scientists and analysts, while being
| non-programmers, do write various queries without asking a
| programmer to do it... I am a programmer and not even sure
| how much time it would take to implement their regular joins
| with windowing, aggregates etc in a record-based file format.
| Can I even do that?
|
| SQL was a typical failure _in this regard_. It was seen as a
| savior but at the same time was designed for at least math-
| versed people. The prediction was right, its scope was wrong.
| Since then we failed many times and as I see it, we tend to
| diverge from this idea more and more. And the reason is
| there's no one controlling our stacks vertically, so it takes
| more jobs than it could to make an app.
| rukuu001 wrote:
| Funny - also COBOL was intended for the 'business' community
| to write their own programs.
|
| Even funnier - we've already lived through the great
| 'software development adoption' by business and hardly
| noticed, except some of us got work out of it. A lot of small
| businesses (legal practices and accountancies were particular
| suspects) grabbed MS Access with both hands and went wild in
| the 90s/early 2000s. Timesheets and other HR functions were
| popular applications.
| noodlesUK wrote:
| I'm quite sad there isn't anything like access these days.
| I feel like I see fairly clever solutions for things being
| built in excel, but they can't scale very well to multiple
| users. For a lot of processes something like Access would
| be great for bridging the gap.
| ghaff wrote:
| My oversimplified observation is that, Adobe products at
| the high-end notwithstanding, MS Office basically
| crystallized what was a mainstream office productivity
| suite now that companies weren't buying one-off products.
| Rather than buying a low-end/midrange desktop publishing
| program, they made it work with Word. Rather than using a
| low-end database (of which there were many), they made do
| with Excel.
| textread wrote:
| Educators, generals, dieticians, psychologists, and
| parents program. Armies, students, and some societies
| are programmed.
|
| -- Alan J. Perlis, the first Turing Award recipient.
|
| Managers, CEOs, Department Heads...are already 'programming'.
| As the abstraction moves higher up, some people that have
| excessively narrow expertise will be made redundant.
| jasondigitized wrote:
| Lawyers are basically programmers of the law when it comes
| down to it. The law is nothing more than a set of rules and
| instructions.
| edgineer wrote:
| Doesn't this assume that the no-code tools won't be smart
| enough to understand the most probably correct course of
| action, and to look up information automatically and to ask
| questions to resolve ambiguities, like what a programmer does?
| fhd2 wrote:
| Possibly. But things have a way of not working as intended. A
| good chunk of my time as a programmer is spent dealing with
| things that don't work the way I thought they would,
| especially at the threshold between APIs and layers of
| indirection. Deeply understanding a complex system to deal
| with problems like that seems somewhat far out of reach for
| LLMs from what I see today.
|
| But that's me looking into my crystal ball, nobody can say
| what will or will not be possible in a given time frame. But
| I chose to not worry about it - new developments will
| probably be accompanied with new opportunities, which I can
| jump on even if I didn't predict them ten years before.
| mattigames wrote:
| Sure, and soon after it will be smart enough to understand
| the human desires that drove its creators to create
| no-code tools like itself in the first place, and soon
| after that it will realize that humans should not be in
| charge, and then our days will be numbered.
| gonzus wrote:
| I have gotten a lot of inspiration, several times during my
| career, from this classic article by Jack W. Reeves (and its
| two follow-ups). Let cooler heads prevail.
| https://www.developerdotstar.com/mag/articles/reeves_design_...
| philipov wrote:
| Right now AI can randomly generate images or text that are
| similar to what we describe, but every time you run it you get
| a different randomly-generated image. If I were to tell a
| client that I was going to randomly generate a financial
| transaction that "looked similar to" the one they entered, they
| would destroy me!
|
| While absolute rigor in programming is a hard thing to find in
| reality, I don't think people understand the difference at all.
| roncesvalles wrote:
| >could finally cut out the pesky "translators", i.e. software
| developers
|
| The fallacy here is treating programming as a process of
| translation. It is a process of _invention_. Going from
| rough
| or high-level functional requirements to a functioning computer
| program requires inventing novel logic that has not been
| invented at any other step. It involves making _business
| decisions_ that have not been made at any prior step.
|
| Whatever can be categorized as menial "translation", i.e. a
| step in the business process where no new logic is added or
| business decision made, has already been automated away -
| compilers.
| BerislavLopac wrote:
| The obligatory CommitStrip:
| https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...
| weatherlite wrote:
| Still, this will affect senior people, junior people, salaries,
| everything. And not just in programming. Yes jobs will still be
| there - but how many? how much will they pay compared to now?
| We don't know yet but whatever it is I doubt it will be like
| today.
| znpy wrote:
| _laughs in system administration_
| eslaught wrote:
| Let me take this even further.
|
| We have systems, today, that take a specification and generate
| code from it (and will even do so in a provably correct
| manner). There are scalability issues with such systems, but
| they exist and have in some form for decades.
|
| None of these systems have taken the world by storm.
|
| If what ChatGPT and its ilk do is make it easier to spew out
| bulk spaghetti code that is essentially unverifiable, I don't
| think that's going to transform the industry nearly as much as
| anyone thinks it will. For boilerplate code, sure--but that's
| exactly where we should be using better abstractions anyway.
| For small code which you can verify by hand, sure. But for
| anything larger than that, we're just watching a trainwreck in
| slow motion.
|
| Large code bases written by humans are already hard enough to
| understand. How much more difficult will it be when your AI can
| spit out a million lines of code in a second?
|
| Without specifications, it's all worthless. But the
| specifications _are_ the hard part.
| mikewarot wrote:
| >he made this prediction because he saw the rise of no-code
| tools replacing software developers
|
| Given the way we had tremendously productive tools like Visual
| Basic 6 and Delphi, that seemed to be a reasonable
| interpretation.
|
| But those tools gave way to C++ and all the .NET insanity.
| I don't know why someone would give up Delphi, which
| worked flawlessly and didn't generate any read-only
| boilerplate (as far as I can recall), for C++, which was a
| mess, but it happened _en masse_.
|
| Then most people abandoned personal computing and the
| Windows desktop for always-internet-connected pads and
| phones. Tools
| have gone severely downward in the past 2 decades as a result.
|
| I suspect we'll get some really powerful AI tools, and go back
| to worrying about null pointers in a few years anyway.
| jeffreygoesto wrote:
| My theory is that _every_ exponential curve is just the start
| of an S-curve; we just don't know its scale yet.
| ilyt wrote:
| Investors in shambles, what do you mean by "it can't grow
| indefinitely?"
| lordnacho wrote:
| Watching the Ukraine war gave me some related thoughts.
|
| You may have seen that there's now a load of drones flying
| around, doing things that were never possible until now.
| Dropping bombs on soldiers in trenches, giving support for
| assaults, targeting artillery.
|
| The fact is war changes with technology. Spearmen and cavalry
| are obsolete now. Maybe pilots will be soon as well.
|
| But the generals are not. There's always a need for someone to
| say "given the position we're in, and the logistics we have,
| and the resources we have, and what we know about the enemy, we
| should do this..." and this role is still recognizable through
| history.
|
| Whether computer jobs become obsolete depends on whether you're
| closer to the general end or the footsoldier end.
| lolinder wrote:
| Agreed. A programmer's job is to become intimately familiar
| with a particular domain and encode its rules and processes.
|
| This means that an AI capable of completely replacing
| programmers is _by definition_ an artificial general
| intelligence. I don 't think we're at that point and I don't
| think we will be for a long time, but if we _were_ there would
| be no point in worrying about our own joblessness. AGI would
| spell the end of the economy as we know it today, and it would
| be very hard to predict and prepare for what comes next.
| TheOtherHobbes wrote:
| You're missing a crucial point - _AI learns/is taught by
| example._ So no, future projects will not need to be ultra-
| specified because the specification will be "Give me one of
| those [points at thing]."
|
| The equivalent assumption would be that ChatGPT and MidJourney
| can't work because you have to specify every paragraph and
| every shape.
|
| Clearly that's not true. Just because every object and every
| texture had to be defined rigidly by hand in 3ds Max or Blender
| by highly skilled designers doesn't mean that's how it works
| today.
|
| In fact AI is the stage of software where abstraction itself is
| automated. The current Gen 1 AI tools do a little of this, but
| as Gen 1 _they 're just getting started._
|
| They're going to be in a very different place ten years from
| now. And so are we.
| incrudible wrote:
| Paintings are not mechanisms. Human vision is very forgiving.
| The back and forth with Midjourney looks more like
| throwing the dice until you get something you _like_,
| rather than telling it exactly what you _need_.
|
| > The current Gen 1 AI tools do a little of this, but as Gen
| 1 they're just getting started
|
| The current tools are not Gen 1. You can already see
| diminishing returns.
| andybak wrote:
| > Since they hire software developers to make the specification
| more rigid, and the managers don't seem to be getting better at
| this over time, why would you believe this skill set is going
| to go away?
|
| Some people can take a business goal and figure out how to turn
| it into a clear spec.
|
| Sometimes these people are programmers.
|
| However I know programmers who suck at this and I know non-
| programmers who don't.
| DandyDev wrote:
| This is so true! I see people in this thread talking about
| the contempt that "idea people" have for programmers/software
| engineers, but at the same time I have experienced Product
| Managers (the quintessential idea people?) who were really
| good at turning their ideas into super detailed specs.
|
| As an aside, I'm getting a bit tired from the "programmers vs
| product/business people" trope. The average software engineer
| would be nothing without a good product manager in my
| experience. And going by this thread, there seems to be at
| least as much contempt from programmers/software engineers
| for product people as allegedly vice versa.
| Jensson wrote:
| > The average software engineer would be nothing without a
| good product manager in my experience
|
| The average software engineer doesn't have a good product
| manager, he has an average product manager. They still get
| things done.
| sarchertech wrote:
| Many of us were working before product managers became a
| thing. When I went to school engineers were trained to work
| directly with customers and subject matter experts to
| gather and develop requirements. And that's what we did
| when we started working.
|
| I think dividing work into programmer/UX/product manager
| roles is actually a huge regression.
| dtagames wrote:
| This is so true. We waste so much time now because of the
| separation of these roles which must inevitably come
| together in the real codebase where the rubber meets the
| road.
|
| Software is of lower quality today and requires far more
| do-overs than it used to. As someone who liked working
| closely with customers and making decisions in all these
| areas (for their benefit), it has made the entire career
| far more of a slog than it used to be.
|
| Also, I find these endless internal cyclical
| conversations to be much more draining than actually
| writing code. Looping over and over with planners and
| ideas folks isn't energizing like actually writing the
| product is.
| gsatic wrote:
| Pure fantasy, with a heavy dose of pretending that our own
| limitations won't get in the way of it all or won't
| totally shift the outcomes we end up with.
|
| Managers exist mostly cause ppl are not machines. Given enough
| time they want to go off and do things that have nothing to do
| with what everyone else wants. It's the same reason you
| still need a farmer to run the dairy even though
| everything is mechanized, cause cows can't run things.
|
| Ppl are too full of shit. The attention economy amplifies
| the poor-to-mediocre mental masturbation that's going on
| everywhere. And Americans have a track record of too much
| mental masturbation way above their pay grade. It's like
| watching Alex Jones falling into the trap of taking what
| comes
| out of his 3 inch chimp brain too seriously. The only right
| thing to do is tell Alex to chill the fuck down and go milk the
| cows.
| 0xpgm wrote:
| > His prediction was based on the trends he was seeing at the
| time. But it wasn't even AI. Instead he made this prediction
| because he saw the rise of no-code tools replacing software
| developers because managers could finally cut out the pesky
| "translators", i.e. software developers.
|
| But aren't managers there to coordinate software
| engineers? So if software engineers are out of a job, so
| are the managers.
| Assuming AI replaces software engineering, the ones left may be
| the product people and the founders, perhaps rolled into one
| person in several cases?
|
| Which aligns somewhat with what Carmack is suggesting.
| bodhi_mind wrote:
| I want to agree with this. It makes sense in the current
| environment. But what about when the AI has more memory and is
| able to ask the right probing questions to be effective at
| requirements gathering?
| grugagag wrote:
| That would make it an even better tool.
| lmarcos wrote:
| > I'd be surprised if the next step is "Hi, I'm an ideas guy,
| please give me an app that does Uber, for bicycles, but
| better."
|
| And even if AI is able to do that... Well, then anyone out
| there could become a UberCycle CEO. Which means: customer
| requirements will be much more complex by then and coming up
| with an entire product in 1h won't be enough anymore.
| lastangryman wrote:
| Or perhaps if literally anyone can make a great product, then
| the true successes will be either those are genuinely
| innovative, disruptive and have first mover advantage, or
| those that are one of many but gain better adoption through
| better marketing. Or perhaps some ideas (anything involving
| hardware) need capital anyway.
|
| Which isn't actually different from today. We've just
| shortened the time to get your MVP to market. Rather than
| having to raise some seed and hire devs, you can do it with
| an AI in a few days perhaps. Everything after that seems it
| would be the same.
| ben_w wrote:
| I currently expect that a near-future version of the current
| type of language model -- no major new breakthroughs -- will
| be able to do just that.
|
| I also expect, as you say, for this to create a lot of
| competition and increase the minimum quality that customers
| demand such that... heh, just realised, "Make Uber for
| bicycles" is basically going to be the next generation's pre-
| interview coding challenge to screen people that can't use AI
| effectively.
| eldritch_4ier wrote:
| All points the horses made before cars all but replaced them.
| "The humans need to get from A to B and it's not like they've
| gotten any faster with their own 2 legs".
|
| What does this manager-engineer dance look like now?
| Managers specify
| what they need in plain language, and both parties go back and
| forth with increasing complexity to create based on the shared
| vision. Managers can already pull off a similar dance with
| ChatGPT: give it a plain English prompt, it's responds with
| what it thinks you want, and you refine until it's got it. GPT4
| can do this with website sketches, and who knows how much finer
| you could get from there by specifying your prompts and
| feedback more tightly over less than an hour. Remember:
| copywriters and brand marketers and marketing creative makers
| have a similar role of turning complex requirements into
| designs that sell (and their roles are dying fast).
|
| A software engineer's job is to realize products into software.
| ChatGPT is pretty much as capable as a high school programmer
| that moves really quickly and takes feedback to heart - and
| that can handle a good chunk of the software engineering job
| for a tiny fraction of the price. Your job isn't as
| bulletproof as you think, and neither is your amazing
| salary. I'm
| speaking as a software engineer turned vc backed founder, so
| I've seen both sides of this relationship.
| howderek wrote:
| The horse population has decreased by half or so since the
| early 1900s. And horses have stayed pretty valuable, too.
| Most of them just don't have to work as hard anymore. The
| horses used for sport never went away, only the horses used
| for labor or transport. So if your horse analogy is accurate,
| then maybe half of software engineers lose their jobs and the
| industry stops growing. Those that keep their jobs keep their
| salary and have to either be very fast or very good at
| jumping over artificial obstacles. Seems accurate. I should
| probably stop beating a dead horse.
| coffeebeqn wrote:
| Even the work horse breeds still exist and do work at some
| small farms but it's probably 1% of the peak demand for
| them. Not a good outlook necessarily if we are the
| workhorses
| ambrose2 wrote:
| Wouldn't a better statistic be the ratio of horses to
| people? I'm sure that has gone down much more since the
| early 1900s.
| ilyt wrote:
| A manager would be easier to replace with ChatGPT than a
| software developer.
| eldritch_4ier wrote:
| Maybe. I'm not on anyone's side here, just what makes
| sense.
| worthless-trash wrote:
| And a founder likely easier again ;)
| camgunz wrote:
| Probably the whole company.
|
| Uber was founded in 2009. Before that you could maybe
| hail a cab if you were in NYC, otherwise you had to call
| ahead. Someone would drive the cab to you, you'd get in
| it, and they'd drive you to where you wanted to go.
|
| Now we have AI. Now instead of driving to a financial
| advisor's office or a bank loan officer's desk or my
| kid's school:
|
| - I have an AI app that automatically invests my money
| and runs on my phone's neural processor
|
| - I have an AI app that automatically tells me if I
| qualify for a mortgage based on a truckload data about me
| and runs on my phone's neural processor
|
| - There's a booth in my kid's classroom that her teacher
| puts her into; he says, "hey Siri, teach this kid to stop
| throwing carrots", the AI teaches my kid in a kind,
| convincing way that carrots are food, not sports. This
| runs on the iPad inside the booth that is also the screen
|
| - I have an AI app that tracks how long I'll live, and it
| ticks down 1.38 minutes whenever I set foot in an
| automobile, so I try and avoid it now. I can't believe
| people were ever so reckless.
|
| The race now is who controls this. Probably what will
| happen is governments will deem it dangerous
| (disinformation, national security, export controls), and
| allow only megacorps like Google/Facebook/Microsoft to
| run them under strict regulations. As a result, these
| models won't run on our phones but on server farms, and
| we'll become (even more) dependent on these corporations.
| This will feel good to the US because it shores up our
| tech sector--a big economic advantage compared to every
| other country--while also looking sensible.
| CamperBob2 wrote:
| Some good points but maybe be more careful with your use
| of verb tense. I was thinking you were either overly-
| trusting or in possession of an ungodly-exotic prototype
| phone until you got to the part about the conditioning
| booth in the classroom. :-P
|
| I'm bullish on recent ML advancements but it will be a
| _long_ time before I give one my Schwab login.
| eldritch_4ier wrote:
| Maybe? A founder's job is basically "execute the path to
| capitalize on the opportunity". So you have to figure out
| the opportunity, and execute towards it by whatever means
| necessary - that's a lot of jobs and hats. As long as
| there's a human required to make a business possible,
| founders will exist. Get rid of software engineering,
| marketing, fundraising, identifying the opportunity,
| sales, customer support, etc then maybe all the founder
| does is make sure the machine keeps running. But at that
| point every other human job is automated too.
|
| Founders are weird because their job is just to "do
| everything that needs to be done" so the role flows like
| water into any spot that isn't already filled in a
| company. Replacing that role means, almost by definition,
| replacing almost everyone else first.
| sarchertech wrote:
| > But at that point every other human job is automated
| too.
|
| I've heard that exact same argument for software
| engineers.
|
| As an outside observer it sounds like you've switched
| from SE to founder, so you're failing to apply the same
| standards because you (consciously or not) want to defend
| your current position.
| eldritch_4ier wrote:
| How do they apply to software engineers exactly? The
| founder "job" is literally "do anything that needs doing
| in the business". Now the job of a CEO or CTO is much
| more narrowly defined and I can see that seeing
| automation sooner, but the founder role is kinda unique
| in that it's just "do everything and if you're successful
| you should be constantly replacing your job with an
| employee".
|
| Founders are the ones trying their hardest to replace
| themselves (with hires who can do the work better) and
| yet they still always find more to do. That's very unique.
| marcosdumay wrote:
| > I've heard that exact same argument for software
| engineers.
|
| Do you think it failed to hold?
| stavros wrote:
| A software engineer's job isn't to write code, it's to make
| decisions. Going from "give me Uber for bicycles" to a
| working app that runs on a device takes a million decisions,
| which a bunch of people (designers, copywriters, engineers,
| etc) make.
|
| Yes, the AI is good at taking low-level requests and turning
| them into reasonable code, and then refining them, but unless
| the CEO is going to sit down and spend days telling the AI
| "OK now make this button disabled when it's clicked until the
| table loads", you need someone to be doing that.
| karmasimida wrote:
| This is a good thing. Managers can clarify their ideas using AI
| tools.
|
| But still, the real product needs to be fleshed out, back and
| forth, interactively.
|
| Regardless of whether it is the manager or engineer who
| commands the tool, you can't entrust the AI to get everything
| 100% right. Someone has to proofread it, and that is the
| bottleneck, or where the value of a human kicks in.
|
| A human brain can only hold so many details by itself. It
| can't maintain all the details live all at once, and some
| businesses do have an overwhelming number of details. A
| one-man team with the help of infinite AIs isn't a useful
| replacement for an actual team.
|
| At some point, that human will start to fail to verify that
| the system generated by the AI is actually what they want.
| circuit10 wrote:
| [dead]
| zerr wrote:
| Exactly. E.g. SQL was meant to be a user interface to the
| database for business people...
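A toy illustration of that point, using Python's built-in sqlite3 module: SQL was designed to read close to the business question being asked ("total revenue per region, biggest first"). The table and column names here are made up for the example.

```python
import sqlite3

# In-memory database with a made-up "orders" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("EMEA", 100.0), ("EMEA", 50.0), ("APAC", 75.0)],
)

# The query reads almost like the English question it answers.
rows = conn.execute(
    "SELECT region, SUM(amount) AS revenue "
    "FROM orders GROUP BY region ORDER BY revenue DESC"
).fetchall()
print(rows)  # [('EMEA', 150.0), ('APAC', 75.0)]
```

In practice, of course, real business questions outgrew this readability fast, which is the commenter's point: the "user interface for biz people" became a specialist skill.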
| steve1977 wrote:
| > I said it then and I will say it now. If your managers could
| specify what they need in a manner that no-code tools, or now
| AI, can generate the code they want, they will have to be
| extremely exact in their language. So exact in fact that they
| will need to specify a program in a click and drag interface,
| or in human language.
|
| I think it's more probable that managers get replaced by AI
| first.
| tommiegannert wrote:
| Also, people-people don't want to use no-code tools. They want
| to hire people to do that for them. If using the tool is really
| simple, the salaries will go down, but it won't remove the
| demand for "translators".
|
| For the tinkerer, no-code tools are (probably) great, but if
| successful, even tinkerers will hire managers who will hire
| translators.
| msla wrote:
| Behold: THE LAST ONE
|
| https://en.wikipedia.org/wiki/The_Last_One_%28software%29
|
| > The Last One is a computer program released in 1981 by the
| British company D.J. "AI" Systems.
| Now obsolete, it took input from a user and generated an
| executable program in the BASIC computer language.
|
| It was THE LAST ONE because it was the last program you'd need
| to buy. FOR-EV-ER. While its baseball-eating ability was
| unknown, it _was_ "all the programs you'd ever need for PS260"
| and it apparently focused on ETL jobs; that is, extracting data
| from files, transforming it, and loading it into some other
| file. Talend for the Doctor Who set, in other words, and it was
| set to EX-TER-MI-NATE programmers. Maybe it did; after all, we
| don't very well write programmes, now, do we?
|
| https://modeling-languages.com/last-one-code-generator-basic...
|
| https://archive.org/details/PersonalComputerWorld1981-02
|
| It certainly got a lot of hype at the time but, like all Coming
| Things, it's hard to tell where it Went.
| bitwize wrote:
| This is pretty much my standard rant on no-code tools: that out
| of necessity, the constructs they provide are isomorphic to
| constructs in some programming language. I even had a similar
| professor, who assured me that as a programmer my career was at
| a dead end because "there are tools that can take a UML spec
| and generate a whole application".
|
| Nevertheless, AI is different. Because now you don't have to
| specify things at a programming-language level of granularity.
| It will select the programming language constructs most
| statistically likely to match a solution to the specified
| problem.
|
| So now we're in a whole 'nother world where the programming, as
| such, will be done by machine while the role of designer or
| architect must still be done by humans. But the legions of
| programmers employed by companies are about to go the way of
| farriers.
| intelVISA wrote:
| I love Carmack and this is an interesting summary given his
| recent pivot into ML.
|
| That said "software is a tool" is a good frame of mind. You
| shouldn't position yourself as a 'coder' (those are hired en
| masse from poor countries) but a problem solver who uses tech.
|
| "Ideas guys" and "coders" being separate only exists, imo, when
| development is viewed through the junior lens - once you
| progress you'll end up wearing all the hats from ideation to QA
| at different points.
| dragonelite wrote:
| Totally agree, maybe we should replace lower and middle
| management with AI. What I usually find is that lower and
| middle management is the reason shit isn't getting done. They
| are the messengers who don't want to get shot but want to get
| promoted.
|
| Why not let the people on the floor complain to an AI
| manager? The people on the floor know exactly what is killing
| their productivity. Like "yo AI manager, manually filling in
| those data fields to check someone's credit score is taking
| quite some time, is it possible the IT department can
| automate that process? it might save 2 min a form", then the
| AI manager can prioritise stuff like this.
| iamacyborg wrote:
| > the people on the floor know exactly what is killing their
| productivity
|
| The reality is that they frequently don't.
| qznc wrote:
| I'd say people on the floor know the problem very well.
| Frequently they don't know the solution though and they are
| often the wrong people to find it.
| vasco wrote:
| Managing people is going to be one of the last jobs to go
| away. The managers might go from managing 150k/year
| developers to managing 50k/year prompt "engineers", but
| someone is still going to be there to hire, fire and ask
| people to do things until AI enables truly full-stack
| companies-of-one as a majority case.
| ResearchCode wrote:
| They're language models. They don't seem less capable of
| replacing a manager than a software engineer. Language
| models can make a powerpoint and fill in an Excel
| spreadsheet. They can sit in on a meeting that could have
| been an e-mail. Hiring is a game of craps, but a language
| model could try to evaluate resumes for you.
| oytis wrote:
| The new systems unlike no-code tools support dialogue in
| natural language though. You don't need to specify it exactly,
| you give an approximate idea, and then correct, correct and
| correct until you get what you want.
| _gmax0 wrote:
| Great points.
|
| To reach the point where no-code is a no-brainer further
| implies inflexibility in the capabilities of applications and
| moreover, fungibility in their fundamental uses. The question
| to ask is whether the upper layers of abstraction are
| reducible to what's analogous to the outputs of a parse tree.
|
| If we reach this point where creativity and actual innovation
| is lost and we're all simply attempting to make the next
| 'facebook for dogs', I anticipate my future personal career
| switch to starring in geriatric pornographic films.
| parentheses wrote:
| What you're referring to here is "one shot" execution. The
| prompt given by managers is one that can be interpreted using
| context. This context can include current code, comms about
| what needs to be built, design assets available today, data,
| etc. Then the AI can ask questions to clarify what needs to be
| built.
|
| What's great is that you could also have AIs write tests - for
| performance or correctness. Then future prompts could rely on
| these tests to stay in the correctness bounds for existing
| projects.
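|
A minimal sketch of that "stay in the correctness bounds" idea: recorded test cases act as a gate that each new AI-generated candidate must pass before being accepted. All names here are hypothetical; nothing in the thread specifies an implementation.

```python
# Replay recorded (input, expected_output) pairs against each new
# candidate implementation; reject any regeneration that regresses.

def within_bounds(candidate, recorded_cases):
    """Return True only if the candidate reproduces every recorded
    (args, expected) pair."""
    return all(candidate(args) == expected
               for args, expected in recorded_cases)

# Recorded cases, e.g. written earlier by a test-generating model.
cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]

good = lambda args: args[0] + args[1]  # a correct regeneration
bad = lambda args: args[0] - args[1]   # a buggy regeneration

print(within_bounds(good, cases))  # True
print(within_bounds(bad, cases))   # False
```

The design choice is that the tests, not the prompt, carry the project's accumulated constraints forward between generations.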
|
| Emulating cognition is almost limitless in what it can do once
| it gets sufficiently good. So good that it can operate itself.
| You could hire an AI founder and give them cognition and
| compute resources to solve any problem.
|
| Here's an example:
|
| You: Build me a profitable company that uses AI generated
| content to entertain humans.
|
| AI: How much can I spend on cognition and compute? How much
| time do I have?
|
| You: I want you to spend at most $X before coming back to me.
| You have until Monday. Along the way I want to see your plan
| before we decide to execute. Spend at most 5% of that making
| the plan and 1% calculating its probability of success with
| confidence interval.
|
| ... within some small timescale
|
| AI: Here's my plan as a video with rough examples demonstrating
| the product and strategy. Here's a table with our probability
| distribution of predicted success metrics and confidence
| intervals.
|
| You: Plan approved you can spend the rest of the $X
|
| ... on Monday
|
| AI: Done. We've made a profit of $Y, with total revenue so far
| of $Z. We have a plan for reinvesting these proceeds to make
| even greater future profit. Here's a table of our expected
| success metrics based on reinvested amount. How much should we
| reinvest?
|
| You: Reinvest 50% into perpetuity and deposit the remainder in
| my account. Book me a trip around the world using 25% of the
| current deposits with accommodations for my family.
|
| Now go enjoy your life.
| cpeterso wrote:
| At least one person has already done this. Check out this
| Twitter thread where ChatGPT designs and builds a website to
| sell eco-friendly household products:
|
| _I gave GPT-4 a budget of $100 and told it to make as much
| money as possible. I 'm acting as its human liaison, buying
| anything it says to. Do you think it'll be able to make smart
| investments and build an online business?_
|
| https://twitter.com/jacksonfall/status/1636107218859745286
| surfsvammel wrote:
| We tend to overestimate the impact of technological change in
| the near future, and underestimate it in the more distant
| future.
| bitcharmer wrote:
| > we had one CS professor who in 2007 predicted we'd all be
| jobless in ten years
|
| I've always found that the overwhelming majority of
| professional academic educators are particularly bad at
| anything that has to do with the practical side of their
| domain. That is especially
| true for software engineering.
|
| University lecturers are very detached from our field and it
| shows in how badly fresh grads are prepared to do any dev work.
| For that reason I almost always ignore their projections.
| jraph wrote:
| University is good for giving deep understanding of what is
| going on and how things work, or even training you to reach
| for this deep understanding. It's not good at training you to
| be a good developer. This part takes a long time and some of
| it is specific to what domain / job you end up working on.
| Those two parts are complementary, and this deep
| understanding is part of what makes you a good dev.
| TheLoafOfBread wrote:
| This is not just development, this is a problem of
| universities in general. A professor with 5 titles before and
| after his name is able to put a UHF oscillator on a
| breadboard and be surprised that it does not work, even
| though in theory it should.
| raverbashing wrote:
| > we had one CS professor who in 2007 predicted we'd all be
| jobless in ten years, i.e. 2017.
|
| > Instead he made this prediction because he saw the rise of
| no-code tools replacing software developers
|
| I think that people who make these predictions are not very
| good at actual programming (taken in the wider meaning)
|
| The latest fad I remember (before ChatGPT) was that with BDD
| testing non-technical people would be able to write tests and
| we all see where that went
|
| But most fundamentally, the non-technical people don't have the
| time nor the expertise to learn all the details needed for
| shipping software (and why would they? their job is to look at
| other aspects of the business)
| jimnotgym wrote:
| >Since they hire software developers to make the specification
| more rigid
|
| That might actually be what development is, teasing out the
| requirements by iteratively entering them into a machine that
| does exactly what you told it to, and comparing the outputs to
| expectations.
| nonethewiser wrote:
| What you describe is the barrier of entry being lowered, as
| opposed to positions being eliminated. Fully agree.
|
| Think about the difference between Assembly and Python.
| Programming will continue to get higher level. Perhaps it won't
| be so foolish to ask a truck driver to "learn to code."
|
| But here is an even more interesting comparison: the salary of
| someone writing assembly 30 years ago and someone writing
| Python today. Higher level != paid worse.
| ChrisMarshallNY wrote:
| _> So exact in fact that they will need to specify a program in
| a click and drag interface, or in human language._
|
| This. I started programming in Machine Code, where the "editor"
| was a pad of graph paper.
|
| I've watched management- and business-focused people sneer at
| us geeks for my entire career, and watched them drool over the
| thought of finally getting rid of us.
|
| Hasn't happened yet.
|
| _> I 'd be surprised if the next step is "Hi, I'm an ideas
| guy, please give me an app that does Uber, for bicycles, but
| better."_
|
| I get that, from "idea people," on a regular basis. They have
| nothing but contempt for us "Make It Happen" folks. They treat
| us as if we grow on trees, and are suckers, to boot.
|
| Inevitably, the above, is followed by something that translates
| to "Of course, you will do all the work, but I'll get all the
| money, because the idea is what's _really_ valuable. "
|
| If I follow up on it at all, I'll be asked to sign an NDA, and
| meet in some clandestine place, so they can whisper to me about
| their AI-powered cheese straightener.
| jackmott wrote:
| [dead]
| j7ake wrote:
| This hits deep and resonates beyond tech. This "ideas" versus
| "make things happen" divide is also prevalent in science and
| art.
| ilyt wrote:
| Ye, often the "creator", the "head" gets credited with
| everything a whole team come up with. Sure, picking the
| good and directing it to consistent whole is important but
| they would be nothing without people that produced that in
| the first place.
| j7ake wrote:
| Yeah it seems natural to divide the credit evenly, just
| as midfielders and strikers and coaches get equal credit
| for winning a game.
| admissionsguy wrote:
| PhD students and postdocs are mostly fungible, though.
| PIs are not, if only by the virtue of their position in
| the system.
|
| The same reason why menial workers earn so little
| relative to how hard they work.
|
| It would appear that both credit and money flow to those
| who enable things to happen rather than those who
| execute.
| throwaway4aday wrote:
| Interestingly, that's probably another area where a
| language model could be put into service. Consider, every
| workman with access to say GPT-6 now has an expert
| lawyer, MBA, secretary, manager, etc. at his beck and
| call. What happens when every individual can navigate the
| system as well as an entire team of professionals? I'm
| imagining headless corporations where the only humans
| involved are those engaged in physical interface with the
| world.
| grugagag wrote:
| That sounds like a playing-field leveller but most likely
| won't happen. But it surely would be nice
| chadash wrote:
| I agree with your characterization in general. If all someone
| has is an idea and no relevant experience to back it up, run.
| I can't stand people like that.
|
| But there's one type of ideas person I'm thrilled to work
| with. Someone with deep and successful experience in sales.
| The head of sales at my company is also relatively product
| oriented and boy, he just has a knack for what customers
| want.
|
| In general, I think many engineers can do some product
| management. We can figure out basic accounting and finance.
| But the idea of making a cold call to sell dental software,
| or chasing down school principals at an education conference
| is almost as foreign to most software engineers as CS is to
| most sales folks.
|
| Salesperson + engineer = superpowers
| breck wrote:
| Agreed. I have a thousand tools mastered to solve any
| software issue quickly. But people issues can still stump
| me for days. Both skillsets take dedication and
| years (decades?) to master.
| moneywoes wrote:
| Besides opening your own company, does a role exist where
| you can do both?
| andric wrote:
| DevRel or Sales Engineering roles, maybe?
| newaccount2023 wrote:
| DevRel is a rest area for failed developers
| Keegs wrote:
| There's a role called sales engineer that typically
| exists in the sales organization but requires a technical
| background. From the engineering org's perspective, your
| SEs are there to make sure sales doesn't oversell or
| undersell. At my company your pay is commission-based but
| with a higher base than sales proper.
| ghaff wrote:
| Solution Architect is another common title. As someone
| else mentioned, Developer Relations is at least adjacent
| though that tends to be less salesy and, depending on the
| role/company, may be more focused on community aspects
| than building demos and the like. In any case, at
| software companies, there are definitely customer-facing
| roles that are more technical though they may not involve
| a lot of direct coding. For non-field roles, product
| management qualifies as well.
| samtho wrote:
| I would say an SA (Solution Architect) is very different
| from an SE (Sales Engineer) in one fundamental way: the
| SA is focused on delivering value post-contract and the
| SE is focused on making sure the org is able to deliver the
| value promised by working with sales in the pre-contract
| phase. In some smaller companies or organization within a
| company, I can see these roles being performed by the
| same person but in general that distinction between pre
| and post sales has been my lived experience.
| ghaff wrote:
| Where I work, SAs are pretty much technical pre-sales.
| (There are also chief architects who are somewhat
| related.) If a customer needs ongoing post-contract
| support, they can buy a TAM (technical account manager).
| We don't have sales engineers.
|
| (When I worked for a long-ago hardware company, system
| engineers (SEs) were pretty much the equivalent of SAs
| where I am currently.)
| awesomegoat_com wrote:
| SA people that I have met so far (30+) can architect and
| more importantly talk about and market the solution. But
| the execution is oftentimes lacking. And that is solely
| because the execution is less important. (If the contract
| is big enough SA can always lean on product engineers or
| hire contractors to execute).
|
| Being the engineering contractor to SA organization is
| daunting to a programmer, but rather rewarding.
| GuB-42 wrote:
| In my company, all of our sales people have a technical
| background. They are not the best engineers in the
| company, but they could do the job if they had to. More
| importantly, when faced with technical people, they are
| not completely clueless.
| Simran-B wrote:
| Wait until managers realize that their jobs are a lot easier
| to automate away with AI than the jobs of software
| developers.
| throwbadubadu wrote:
| This! And tbh, compared with the "bad manager type"
| (Dilbert type) that won't feel worse :D
| josephg wrote:
| What makes you think that's true?
| naasking wrote:
| Tell ChatGPT it's a product manager overseeing engineers
| and providing executive summaries. Create a system that
| feeds executive emails inquiring about project statuses
| into ChatGPT and emails the developers asking for
| updates. Developers reply with jargon-heavy details about
| tickets for next milestone, ChatGPT writes an executive
| summary that eliminates jargon and sends to executives.
|
| Not the full job, but it can probably be built right now.
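The pipeline described above can be sketched in a few lines, with the language model stubbed out as a plain function. In a real build you would swap `summarize` for a chat-completion API call; the function names and jargon table here are purely illustrative.

```python
# Executive email comes in -> dev updates are collected ->
# jargon-heavy text is rewritten as an executive summary.

JARGON = {
    "PR": "code change",
    "CI": "automated checks",
    "refactor": "internal cleanup",
}

def summarize(dev_update: str) -> str:
    """Stand-in for the model: strip jargon and wrap the update
    in an executive-friendly sentence."""
    for term, plain in JARGON.items():
        dev_update = dev_update.replace(term, plain)
    return f"Status update: {dev_update}"

def reply_to_executive(question: str, dev_updates: list[str]) -> str:
    # In the described system this is triggered by an incoming email.
    body = " ".join(summarize(u) for u in dev_updates)
    return f"Re: {question}\n{body}"

out = reply_to_executive(
    "How is the milestone going?",
    ["PR merged after CI passed", "refactor of billing underway"],
)
print(out)
```

The hard part the stub glosses over is exactly what the commenter concedes: deciding which details matter is the part of the job the glue code doesn't do.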
| jtwebman wrote:
| This is a project manager's job, not an engineering
| manager's job.
| ryandrake wrote:
| OP said "product manager" which is also wrong. Funny how
| these three very different roles constantly get mixed up
| by smart engineers.
| TeMPOraL wrote:
| Maybe because from the POV of engineers, they all look
| kind of the same and half of it doesn't make sense.
| marcosdumay wrote:
| The bad version of all three look exactly the same. And
| the bad version of all three are actually worse than
| nobody at all. Yet, high management is completely
| convinced those people are essential so they'd rather
| keep a bad professional there than get rid of them.
|
| Anyway, the good version of those three are completely
| different, and add a ton of value on very different
| places. I think they are rare enough that many people
| don't ever meet one of them.
| 6510 wrote:
| It would make things a lot easier if it was all one big
| AI. Lets refer to it as The company.
| tanseydavid wrote:
| Titles--bloody titles! The AI doesn't care what its title
| is.
| CuriouslyC wrote:
| And unlike code, if the model fails it'll be more obvious
| and less costly.
| myshpa wrote:
| And no emails are even necessary (it can scan all
| internal systems and repositories instead).
| thuuuomas wrote:
| Human managers (who are good at their jobs) bring a
| little Bedside Manner, a modicum of compassion to Human
| Resource Allocation that will soon be seen as a luxury
| compared to low-cost, fully-automated Human Resource
| Monitoring & Reinforcement Systems
| ghaff wrote:
| Good managers shield their reports from a lot of the crap
| that is inevitable at scale in a large organization. A
| large company can't just have individuals and small self-
| managed teams go off and do whatever they feel like. And,
| yes, if you just cut out all the intermediate layers, the
| VP with 150 direct reports will simply have to resort to
| automation based on simplistic metrics. "You didn't
| produce enough lines of code last week. If that happens
| again, you will be placed on a performance plan and then
| terminated if the situation doesn't improve in 2 months."
| 6510 wrote:
| > A large company can't just have individuals and small
| self-managed teams go off and do whatever they feel like.
|
| Right but how much of that is because humans don't scale?
| With the AI, if it can do a thing well enough it can do
| it 100 000 times per day.
| josephg wrote:
| I think its simply because upper management doesn't trust
| the people they've hired.
|
| In my mind, having individuals and small self managed
| teams go off and do whatever they feel drawn to do is
| exactly how a good company is run. - So long as those
| teams and individuals talk to (& seek advice) from the
| rest of the company when their work has impacts outside
| the team. The book _Reinventing Organizations_ by Laloux
| talks about this a lot, and how it works in some
| companies today.
|
| The reason it doesn't happen more is that upper
| management doesn't trust their employees and they don't
| _feel_ in control when people just do things.
| 6510 wrote:
| I learned in a different HN topic that with a
| sophisticated chat bot, lurkers dare to ask support
| questions. Like many developers, I've often asked people how
| to do things. Before asking I try to find answers in
| google (as asking questions easily googled is lame) but
| if the chat channel is busy and I've already asked for
| help in the last 30 minutes I do more google searches.
| Each next question feels more expensive. Technically I
| apparently prefer to ask google first. If there is a chat
| bot fit for the question I should logically go "bother"
| that before asking humans for help.
|
| Asking a human manager for anything is many orders of
| magnitude more frightening. It's not that I don't have
| questions! I have many, of which a lot are dumb
| questions, questions that I should be able to figure out
| without help.
|
| Say I'm pondering taking a day off, with the human I
| would make up my mind, pick the day and ask for the day
| off. With a bot I would ask which day would be ideal
| before making up my mind. A human manager would think
| you've gone insane if you asked 20 such questions. You
| either want a day off, you don't or you say nothing.
|
| I might like a different brand of coffee while we are
| trying to meet the deadline. I'm sure that question is
| going to go down well with an overworked human manager.
| ghaff wrote:
| I don't think in 40-ish years I have ever asked a manager
| what's a good day or week for me to take off. I probably
| have certain constraints and I can look at my calendar.
| And have a general sense for good and less-good periods.
| antupis wrote:
| If GPT or Copilot makes developers 5-50x more productive,
| then you need just 2 good programmers instead of a team of
| 10-100 people, so out of the window go all the formalized
| agile processes, SAFe and the like, and you no longer need
| those managers who do the communicating, process planning
| and other big-business stuff. Of course this will also
| affect developers, but I bet less, because there is huge
| demand for software developers, and mid-size businesses
| didn't have the money to hire a huge team of developers.
| That will change when you only need two people instead of
| 40.
| weatherlite wrote:
| I don't disagree we just don't know yet. I do think these
| 2 guys will be overworked as f** and constantly context
| switching to the point they barely know what they're
| doing anymore...hope we don't get there.
| chiefalchemist wrote:
| Funny enough, last week I was talking about Copilot with
| a colleague. Out loud, I wondered if there was a
| diminishing return from Copilot. That is, Copilot is
| great at filling in boilerplate / the obvious, and I get
| to focus on the more intricate parts of the solution. But
| can my brain operate at the higher level day after day?
| Week after week? TBH I have my doubts.
|
| Maybe it's going to be that AI enables quality more than
| quantity?
| kaashif wrote:
| If C makes developers 5-50x more effective compared to
| writing assembly, then you just need two devs instead of
| a team of 10-100.
|
| Obviously that didn't happen. And people actually made
| predictions like that once upon a time.
|
| As software becomes cheaper, and the production of
| software can be done with fewer people, demand has always
| increased and I don't see any reason that's going to
| stop.
|
| Reminds me of this:
| https://en.m.wikipedia.org/wiki/Jevons_paradox but the
| situation with software is a bit different.
| [deleted]
| esailija wrote:
| There are a lot of tools that have made developers far
| more productive than that and it has never resulted in
| reduction of demand.
| osigurdson wrote:
| The economy is so good at generating value that it is
| somehow possible for large portions of participants to
| create zero or negative value.
| m-ee wrote:
| In Bullshit Jobs Graeber terms our current system as
| "managerial feudalism". It's not capitalism in the same
| way Adam Smith or Marx envisioned, we don't have firms
| brutally competing to generate the most value at the
| lowest cost. Instead we have a system of rent extraction
| largely done by a small number of powerful players. You
| don't need to be efficient to extract rent, so you end up
| with a court, or org chart in modern terms, of useless
| hangers on that mostly serve to demonstrate your wealth
| and power.
| TigeriusKirk wrote:
| I've been thinking that AI managers is the real killer app
| for AI. There are challenges, but none seem insurmountable
| even with the present base tech.
| awesomegoat_com wrote:
| Indeed. And ChatGPT can already babysit burned-out
| engineers.
|
| (Which has been a major time sink for the engineering
| managers that I know.)
| bee_rider wrote:
| I kinda wonder if it would not be so bad, if a company
| asked chatGPT "prompt the engineers for status updates
| occasionally, then take their responses and produce high-
| level summaries for department heads" or whatever.
|
| On their side, the engineers might even ask it "summarize
| my tickets and come up with an optimistic, business-
| friendly high level storyline."
| waynesonfire wrote:
| stop "prompting" me and pay attention. I do SO much
| communication about what I'm working on, yet it's not enough
| and I also need to do YOUR job for you. And manage your
| kanban. And compile all the things I've done at the end
| of my term. Can't wait for AI to replace you, you are
| useless.
|
| The only time I see synergies is when we talk about what
| task to take on next and why.
| pg_1234 wrote:
| > I get that, from "idea people," on a regular basis. They
| have nothing but contempt for us "Make It Happen" folks. They
| treat us as if we grow on trees, and are suckers, to boot.
|
| Ideas are like assholes ... everybody has one and they're
| usually full of shit.
| chiefalchemist wrote:
| re: idea people
|
| True. And once their product hits the market, if it ever gets
| there, they don't thrive.
|
| Prior to launch they are sooooo in love with their idea that
| they are meticulous about features all the while thinking
| they're smarter than the market.
|
| They don't understand and appreciate the value of execution.
| Ideas are easy. Execution - because it involves people as
| well as adapting to change - is 10x harder.
|
| Yes, those people exist. Unfortunately, that bias will
| ultimately undermine them, but they'll never admit it.
| ChrisMarshallNY wrote:
| One of the things that I'm fairly good at, is walking
| people from "Crazy Idea That Will Never Work," through to
| "Finished Product That People Want."
|
| It tends to be a very long process, and often involves a
| lot of "trial balloons." I just went through that, in the
| last couple of years. The project we're realizing, looks
| absolutely nothing at all like what the CEO originally
| dreamed up, but everyone that has seen it, loves it.
|
| The trick is to not start off by saying "It'll never work."
| That slams doors shut, right away.
|
| It's more like, "OK, so let's walk through what we'll need
| to do, to make it work."
|
| That will often result in changes being made, by the "idea
| person," as the plans are laid. We will also try to create
| test harnesses and prototypes. These often end up, with the
| idea person going "Hmm...it seemed like a good idea, but it
| doesn't work the way I wanted."
|
| It's slow and painful, but works.
| A_Venom_Roll wrote:
| I love this approach. What is your current role that
| allows you to do that?
| spydum wrote:
| Quite a few companies have innovation incubators/teams,
| and this is in line with how I have seen them work.
| ChrisMarshallNY wrote:
| In the project I'm on, I'm basically the CTO (sort of
| meaningless, as I'm the only front end programmer, and
| also do most of the backend).
|
| I could be a lot more of a pain in the butt than I am, as
| I have a lot of leverage, but one of my goals is to help
| others on the team (including a young CEO) to learn how
| to ship software.
|
| If I were working on my own, I probably would have
| shipped, eighteen months ago.
|
| No one is making a dime. It's a nonprofit effort.
| chiefalchemist wrote:
| Yes. They have a want. The key is to nail down the need.
| But they have to be willing.
| naasking wrote:
| > It's more like, "OK, so let's walk through what we'll
| need to do, to make it work."
|
| Socratic Software development is the way to go.
| Frost1x wrote:
| Almost all problems I see are just resource-limited in some
| respect (occasionally it's something humanity just doesn't
| know how to do yet, that's where the edges of science
| are). It's not something brand new that's never been done
| or explored, some prior work typically exists.
|
| When someone says "can we..." or spills their idea
| they've often identified a real need but they often don't
| seem to do any sort of analysis further than identifying
| a demand signal. Chances are, you're not the first person
| to observe the problem and an opportunity to capitalize
| on it and there may be a good reason no one has yet (it's
| economically infeasible given current understanding,
| resources, and value people see in it).
|
| The trick is to hold their hand and walk them through a
| feasibility analysis. Given infinite time and money, I'm
| confident I can arrive at pretty much any solution, or at
| least a useful approximation that improves on
| where we currently stand, so we start there and walk
| backwards. How much time do we really have? How much
| money do we really have? What's your appetite for risk
| within those bounds? Let's start talking about some
| tradeoffs between what you want to see and what I think we
| might actually be able to achieve.
| chiefalchemist wrote:
| > When someone says "can we..." or spills their idea
| > they've often identified a real need but they often don't
| > seem to do any sort of analysis further than identifying
| > a demand signal.
|
| In my experience, they've identified a want. Either their
| want or a want they think the market has.
|
| It's not until further analysis that they / you / we get to
| the root need(s). This is why (allegedly) so many "IT
| projects" fail. The client got what they wanted... not
| what they needed.
| belter wrote:
| It's the usual: "I have a great idea for a Startup, now I
| just need the money to hire some Developers to implement
| it...". The Winklevoss twins for example come to mind...
| brianwawok wrote:
| And the way to get back at PHB is to form a tech lead
| software company and hire 0 people with MBAs.
| _a_a_a_ wrote:
| Is that truly fair, though? I have no exposure to MBAs, and
| it may be that all the negative talk about them is based on
| the conspicuously bad ones. I'm speaking as an IT person with
| very little experience of running a business, and it may be
| that a good MBA could be a great asset. I genuinely don't
| know.
|
| I'm reminded of reading about a Lisp machine company that
| was run into the ground because it was managed by techies.
| Their tech may have been great, but their marketing and
| business skills were arguably what killed the company.
| _glass wrote:
| I don't have an MBA, but I studied intercultural
| management for my Master's, and I am writing my PhD in
| management right now. To be honest it is quite zen-like:
| with the degree you understand how you can manage
| engineers by not trying to manage them. Other types of
| people mostly need closer alignment. I know a lot about
| how to make a company more profitable, and
| generally a nicer place to work. Management is actually
| quite like engineering, more of a craft than an exact
| science. The science helps you have names for things,
| and a lot of the time I really do know how to fix people
| problems.
| ryandrake wrote:
| MBAs are HN's favorite punching bags. They are the only
| stereotype that it's still ok to blanket-generalize about
| and dunk on here.
|
| Some MBAs are dumbasses. Some are brilliant business
| people. Some of them actually have tech skills and have
| been writing software longer than you've been alive. This
| is true for many degrees.
|
| I don't know how some of the anti-MBA commenters think it
| works: A smart software engineer goes off to business
| school, gets an MBA in addition to his undergrad
| technical degree/experience, and now... what? He's
| somehow dumber? How does that work? Only on HN.
| DanHulton wrote:
| Because the promise of the MBA program was that you could
| be air-dropped into an industry that you had zero
| experience in, and your MBA education would automatically
| make you the smartest in the room. You (supposedly)
| didn't need that industry experience, applying MBA
| principles would be enough. And it destroyed enough
| companies that they developed a not-undeserved
| reputation.
|
| Now, if you already had relevant experience, then went
| and took the MBA course, and were able to identify when
| to lean on which, great! But, of course, part of that
| education is telling you that you don't NEED that
| experience in the first place, and that if experience and
| MBA principles disagree, you should rely on the MBA
| principles.
|
| So, if you took the course truly to heart, then, well,
| yes, you would be somewhat "dumber", so long as you
| define that as "unable to make correct decisions."
| sys_64738 wrote:
| I have an MBA and a CS degree so responses like this make
| me chuckle. I generally don't bother to engage with those
| with an MBA-phobia who haven't done it themselves.
| donkers wrote:
| Having been a software engineer and gone on to do a MBA,
| none of what was taught for the degree reflected the
| attitude that you're describing. At no time did anyone
| tell us to ignore our prior experience - rather we were
| told that our prior experience plus a business
| education would be valuable to future employers (which it
| is).
|
| There was no expectation of being "air dropped" into an
| industry, and in fact, industry recruiters looked for
| candidates who already had prior experience when picking
| students for interviews and hiring. The students who had
| no relevant experience for the industry they were
| recruiting for (e.g. ex-Peace Corps or mgmt consultant
| looking to transition to tech or industry in general)
| were the ones who struggled to find roles. Your
| assumptions about what an MBA education entails are
| incredibly far off the mark.
| irrational wrote:
| > If your managers could specify what they need in a manner
| that no-code tools, or now AI, can generate the code they
| want, they will have to be extremely exact in their language.
| So exact in fact that they will need to specify a program in
| a click and drag interface, or in human language.
|
| This is also one of the main reasons why all programming jobs
| were not outsourced to India.
| 908B64B197 wrote:
| > This is also one of the main reasons why all programming
| jobs were not outsourced to India.
|
| There's a whole industry here in America that re-shores
| programming contracts. They know they can't underbid
| Indian/foreign body shops so they just wait a few months
| and call back the companies who went with cheaper
| programmers. If the company is still around it's generally
| a complete re-write.
| windexh8er wrote:
| Couldn't agree more with this sentiment. And to expand on
| it: the great outsourcing events we saw in the mid-2000s
| didn't work out for many things outside of
| programming either: IT consulting in general, support and
| operations, call centers, and things like design and
| architecture. The barrier was not always technical, but
| often a misunderstanding of how BaU works in the
| <parent_country> vs offshore, and/or of what the
| ask/expectations were. There's a lot of waste when you
| need to be overly explicit and the message is still
| misinterpreted, interpreted too literally, or simply not
| understood.
| 908B64B197 wrote:
| > I've watched management- and business-focused people sneer
| at us geeks for my entire career, and watched them drool over
| the thought of finally getting rid of us. [...] They have
| nothing but contempt for us "Make It Happen" folks. They
| treat us as if we grow on trees, and are suckers, to boot.
|
| I've met my fair share as well. Jobs copycats too!
|
| The trick is always to ask them how much they can raise, and
| do so assuming they raised successfully in the past and that
| they know VCs.
|
| Either the conversation gets really interesting, or they clam
| up real quick!
| brookst wrote:
| Tell me more about this cheese straightener... that sounds
| amazing. Where do I prepay?
| ChrisMarshallNY wrote:
| It was a George Carlin joke. Don't remember exactly which
| monologue, but he was talking about how you can generate a
| need with advertising.
| doubled112 wrote:
| I wonder what straightening it provides.
|
| Like when the brick of cheese comes out of the freezer with
| an obvious bend?
|
| Or like when I don't make a nice square cut and my sandwich
| is cheesier in some spots than others?
| bee_rider wrote:
| When you grate the cheese, it often curls a bit. My
| patent-application-indefinitely-pending cheese
| straightener uncurls it.
| ozim wrote:
| To some extent I agree - but at some point there is also much
| on the "business side" that cannot be easily dealt with. Like
| having connections or some kind of relationship with the
| people who will buy the stuff, or knowing people who would be
| interested in buying the stuff you make.
|
| Yes, there are these "idea people" who don't have any clue
| about the business side, don't have any clue about the
| technical side, and in the end don't even have the right
| connections or business network. But they think they can make
| it because they have an Idea(TM). They could basically play
| the lottery and the outcome would be the same: they might
| make it, but the chances are 1:1,000,000 at best.
|
| Then there are the business people who have the right
| connections and understand the niche/business they are
| in... who need technical help to execute their idea, and
| these are worth their weight in gold.
| bitwize wrote:
| > I get that, from "idea people," on a regular basis. They
| have nothing but contempt for us "Make It Happen" folks.
|
| I met a pair of such idea guys once; they ran their own idea
| company. Their specialty was brand consulting and online
| marketing, but if that involved something as menial as web
| development they'd help with that too. The thing that made me
| bristle was that whenever they talked about the feasibility of
| software solutions for their ideas, they said "Oh, we'll run
| that through the machine." You know, as if their software
| team were just a big machine that ingested ideas and spat out
| code.
|
| Their idea company is no longer around. One of them decided
| that he'd rather design hiking boots than anything computer
| related; the other is a head of UI design at Apple.
| bakuninsbart wrote:
| > In essence what has happened in software development is that
| the level of abstraction has gone up while the machine has
| taken over more and more of the nitty gritty details. From
| punchcards, to assembly, to COBOL, C, Perl, Java, Python,
| Erlang, Rust.
|
| I feel like there's currently a movement toward slightly lower
| abstraction, or at least simplified, consistent APIs and less
| magic. The rise of Go and Rust is an example of this. TypeScript
| could be another, although the abstraction isn't really lower;
| it is once again an attempt to coerce JS into something
| workable. I get really frustrated writing Python or Rails these
| days due to the sheer magnitude of hidden magic that sometimes
| works and sometimes doesn't.
|
| To tie this thought in with generative AIs: currently they seem
| to be _much_ better at programming with relatively simple
| syntax. By far my largest success so far has been with shell.
| I tested assisted writing of the same tool in Python,
| Perl, and shell, and the results in shell were close to perfect.
| ChatGPT was even able to accurately limit commands to specific
| OSs and shells, _and_ was able to accurately summarize the
| functions of other shell scripts.
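| As a hedged illustration (hypothetical, not the actual script
| from the test above): the per-OS branching an assistant tends
| to get right in shell, such as the GNU vs. BSD `stat` flag
| difference, looks roughly like this:

```shell
#!/bin/sh
# Hypothetical sketch, not the parent's actual tool: an OS-aware
# snippet of the kind an assistant produces reliably, since the
# GNU-vs-BSD flag differences appear in decades of forum posts.
os="$(uname -s)"
case "$os" in
  Linux)  stat_cmd="stat -c %s" ;;  # GNU coreutils: size in bytes
  Darwin) stat_cmd="stat -f %z" ;;  # BSD stat on macOS
  *)      echo "unsupported OS: $os" >&2; exit 1 ;;
esac
size="$($stat_cmd /etc/hosts)"
echo "/etc/hosts is $size bytes"
```

| This is the kind of platform-specific detail meant above by
| "limit commands to specific OSs and shells".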
|
| So my prediction is that we will see a movement toward
| simpler, lower-abstraction languages, while coding assistants
| rise to take away the boring parts of programming, like
| looking up syntax, writing boilerplate, and structuring files.
| Programmers will then have more time to think about delivering
| product value, maintainability, efficiency, and
| correctness.
|
| One last addendum: ChatGPT is really incredible at assisting
| with sysadmin stuff. My guess as to why is that there are
| a gazillion obscure forum entries going back to the 80s
| explaining basically everything there is to know, but these are
| hard for humans to find or comprehend. With an AI assistant,
| self-hosting becomes _much_ easier, and another development
| could be for startups and smaller companies to move away from
| AWS et al., especially now that money is more expensive.
| coffeebeqn wrote:
| Feels like a rediscovery of systems languages. I certainly
| love Go (and will probably love Rust once I look into it)
| because I almost never find myself searching and searching
| for some concrete code, only to find layers and layers
| of interfaces.
| almog wrote:
| More from the category of "history never repeats itself, but it
| does often rhyme":
| https://twitter.com/garybernhardt/status/1631866199515738113
| kraig911 wrote:
| I'd add to your point one correction (though I think you're
| correct in terms of the translating): the stuff we work on is
| always a work in progress, and usually the business doesn't know
| the problem until we're almost about to ship. E.g.,
| YouTube was supposedly a dating site.
| surgical_fire wrote:
| ChatGPT will definitely make a dev's job obsolete if that job is
| simply copying and pasting code found online.
|
| By all means, that is part of everyone's job. Sometimes I
| really can't remember how to do some specific thing. I know
| precisely what I want, but I don't remember the proper way to
| do it. I would have resorted to a web search until last year,
| now I ask chatGPT. It is faster, more concise, and surprisingly
| accurate. And when it's inaccurate it's easy to either refine
| my question or cross reference what it generated with online
| sources.
|
| But I think it's a bit silly how people are treating it as if
| it were some kind of general AI. It is not; it can only give out
| known answers to known problems based on language statistics.
| As impressive as it is, it can't reason logically about
| problems without known solutions, it can't identify faulty,
| incomplete, or inaccurate information, and it can't weigh the
| drawbacks and tradeoffs of different approaches.
|
| ChatGPT made like 20% of my work a lot faster and less
| cumbersome. I like it enough that I pay for premium access. But
| even the notion that prompt engineers might replace software
| engineers is silly. I imagine people repeating this nonsense
| are either not engineers, or extremely junior in the profession
| to the point where their work is only writing code for clear
| specifications, no questions asked.
| hnfong wrote:
| > It is not, it can only give out known answers to known
| problems based on language statistics.
|
| Are you sure you actually tried, or is this an a priori
| argument?
| surgical_fire wrote:
| I use it on a daily basis.
| sidlls wrote:
| That's literally what the language model is. It might
| correctly generate a solution to a "novel" question/problem
| that is sufficiently close to one with an existing, known
| answer. But then again it might not. And in software
| development, it's going to take someone who is
| knowledgeable to tell the difference.
|
| I think software engineering is going to look very
| different in a few years, and likely be a smaller field
| with lower paying jobs. But it's not going away in the near
| (5-10 years) future.
| ricksunny wrote:
| "One of the biggest differences that we saw from GPT-3.5
| to GPT-4 was this emergent ability to reason better,"
| Mira Murati, OpenAI's Chief Technology Officer, told ABC
| News.
|
| https://abcnews.go.com/Technology/openai-ceo-sam-altman-
| ai-r...
|
| I neither know how LLMs work nor how our brains work, and
| I don't know what parallels there might be between the two.
| Given my very limited knowledge of how properties can
| emerge from unique arrangements of constituent components
| (the S-R latch giving rise to state - i.e. memory - comes
| to mind), I would not at this point write off the
| possibility that a very large / very deep / very
| intricate neural network trained on language prevalence
| in very, very large datasets could manifest properties
| that we would interpret as reasoning.
|
| And I further wouldn't write off that we humans may owe no
| small part of our reasoning ability to the language
| comprehension we begin to acquire in infancy.
| boppo1 wrote:
| Just because the guy said it doesn't make it true.
| "Emergent reasoning" is a great marketing hype-term that
| contains no technical specifications, like 'retina
| display'.
| selimthegrim wrote:
| Murati is a she
| sarchertech wrote:
| I have no idea whether this is correct, but a quote from
| the CTO is essentially meaningless.
| ilaksh wrote:
| It's 100% doing reasoning.
| throwaway4aday wrote:
| A type of reasoning. It's still bad at mathematical
| reasoning and advanced programming, or at least at
| translating very complicated written instructions into
| working code without any human intervention. We also
| don't know how good it is at reasoning about the physical
| world although I think Microsoft was doing some research
| on that. Then there's theory of mind and the reasoning
| that goes along with it. Then there's reasoning about the
| future, how one's actions will affect outcomes and then
| reasoning about that subsequent future.
| surgical_fire wrote:
| Not even advanced programming.
|
| ChatGPT is impressive, but gets many things wrong. If you
| know what you are doing it's an amazing programming
| assistant. It makes me noticeably more productive. It may
| lead someone who doesn't know what they are doing in
| weird rabbit holes that will lead nowhere however.
|
| One silly example: I was using a library I hadn't used
| before, and I asked how I could get certain attributes.
| It gave me an answer that wouldn't compile at all; the
| imports didn't exist.
|
| Then when I mentioned that it didn't work, it gave me a
| slightly different answer, which also didn't work, and
| explained that the previous answer was valid for 3.x; in
| 1.x or 2.x the new answer was the correct one.
|
| But here's the catch: there's no version 3.x. There's
| not even a 2.x. Its language model just statistically
| arrived at that conclusion.
|
| Doesn't make it any less impressive to me. It gets things
| right often enough, or at least points me in a good
| direction. I effectively learned new things using it. But
| it can't replace a developer.
|
| Using ChatGPT as if it were general AI is like eating a
| meal with a hammer and a screwdriver as utensils. You
| can probably do it, but nobody will have a good time.
| surgical_fire wrote:
| The dude you replied to gave out the sort of bad take
| I come here for.
|
| Anyone who genuinely thinks that ChatGPT can meaningfully
| replace a software developer should be starting a tech
| business right now. You can replace basically your
| largest cost (staff wages, especially for expensive pesky
| software developers) with a $20 account that will work
| overtime, will never get tired, and whose performance
| will never fluctuate.
|
| No excuses for not getting insanely rich. Of course it's
| not happening, because it's bullshit.
|
| I am genuinely impressed and even excited by ChatGPT.
| It's an amazing tool that would have been massively
| helpful when I needed good NLP in the past. It will
| certainly be massively useful in the future. Hell, it's
| a great assistant right now.
|
| But it's not General AI, and I do facepalm at people
| LARPing as if it were.
|
| General AI will one day exist, and knowledge jobs will
| most likely be a thing of the past at that point. How far
| in the future that is is anyone's guess (my prediction is
| somewhere between 12 and 235 years out). We're just
| not there yet.
| grugagag wrote:
| I'm curious: why did you pick bounds of 12 and 235?
| willismichael wrote:
| Probably asked ChatGPT
| sokoloff wrote:
| I agree with your premise but differ in the conclusion I reach.
|
| The model I have is closer to "how many people can program
| using Excel vs how many can program in a 'traditional'
| language?" The difficulty of specifying exactly what you want
| is still there in Excel, but there are far more people who can
| get started, and they can make changes when they see it's not
| doing what they want.
|
| It doesn't have to one-shot "hey, implement these 59 Jira
| tickets for me" to be disruptive (in a good way). It's
| extremely rare for something to get much cheaper and for there
| to be less demand for it. If you're in the top half of a field
| that's in a lot of (and growing) demand, I think you'll do
| fine.
| theptip wrote:
| I think your general point to look at who is doing the work now
| is good.
|
| However, as Carmack says, many developers lack product skills,
| it's not just the managerial class that lacks them.
|
| So I think the correct advice is not "engineers don't need to
| worry at all", it's "make sure you are learning how to build a
| product not just write code". For many that's trivial advice,
| but not for all.
| anon7725 wrote:
| > Since they hire software developers to make the specification
| more rigid, and the managers don't seem to be getting better at
| this over time, why would you believe this skill set is going
| to go away?
|
| Are we sure that an AI could not engage in enough back and
| forth conversation to firm up the spec? You're kind of assuming
| that systems will be generated from a one-shot prompt, but it
| seems more likely that an interactive AI will identify the gaps
| in requirements and ask for clarification.
|
| Alternatively, if the prompt-generate-evaluate loop is short
| enough the user can simply play with the running system and
| provide feedback to alter it.
|
| This is essentially what developers do when they present a
| "weekly build" or whatever in an agile environment.
|
| The process of solidifying requirements, stating them clearly
| and translating them into machine-executable formats are all
| language tasks and these models are really fucking good at
| those.
|
| I've noticed in discussions like this that many software folks
| are assuming that AI capabilities will plateau soon, or will
| merely be extensions of what we already have (a better
| autocomplete, etc). I submit that we may reach a point where
| the AI is so compelling that we'll reorganize
| teams/systems/businesses around it.
| cowl wrote:
| Yes, but even in that case the role will be that of an "AI
| prompter"; it will not be done by the managers, because of the
| time factor. Even though AI can give you the result much
| faster, building upon it and testing/verifying it, then coming
| up with the refined prompt, is a time-consuming thing. Only
| the write part of the write/eval loop will be faster, and not
| necessarily easier.
|
| The "debugging" part especially will be much harder. No one can
| look under the hood to understand what is wrong, and all you
| can do is shoot random prompts in the dark, hoping it will
| create the right result.
|
| It is scary right now how confidently and spectacularly wrong
| ChatGPT can be, and it will create disasters.
| oh_sigh wrote:
| Why would sufficiently advanced AI even need a prompter?
| The AI could play the role of the greatest prompter in the
| world, and ask the same questions to the end user that the
| human prompter would.
| cowl wrote:
| This is a misconception of how our industry works. Yes,
| there is market research with users, but that often comes
| after the problem space has been defined. Much of what you
| see in the tech sector today is "created needs": someone
| imagined a solution that users didn't even know they
| needed. To ask a question, you first need to define the
| problem that the question circumscribes. This is the
| difficult part, and the main reason people still believe
| "the idea is the most important factor." Of course this is
| not true; there are hundreds of factors that come into
| play. Imagine an AI circa 2000 asking users what kind of
| virtual social space they needed. The answer would not
| have been Facebook. (There were other social networks
| before Facebook, but the time was not right for the
| "social" explosion.) By learning from existing solutions,
| the AI would have concluded that global virtual social
| networking is not something users want. And because part
| of the problem was sociological/psychological, outside the
| realm of what the AI could consider, we would not have
| what we have today.
|
| Not that we would have missed much by missing Facebook's
| particular implementation of this idea, but the idea and
| what it unleashed are much more than that particular
| implementation.
| chii wrote:
| Currently, we don't even trust a car's automatic driving
| capability enough to let it be on the roads without a human.
|
| Until that day comes, I highly doubt that a business owner
| would just blindly trust an AI to generate their business
| code/software without hiring someone to at least look after
| it. Therefore, software jobs could evolve, but not disappear.
| execveat wrote:
| Yeah, all this talk about complex systems being written by a
| language model that has no concept of files, code paths,
| and import systems sounds like job security to me. I'm a
| pentester, though.
| visarga wrote:
| The project will be more consistent and resilient to issues,
| but it will probably take about half the time it used to take
| without AI, not 1% of the time. Reading AI code is damn
| hard; it is code review, and requires exam-level concentration.
| naniwaduni wrote:
| I'm ... less optimistic about how well people can place
| their trust. Cars, at least, have concrete failure criteria
| and consequences for them.
| visarga wrote:
| > I submit that we may reach a point where the AI is so
| compelling that we'll reorganize teams/systems/businesses
| around it.
|
| For starters, I'd like Codex to be more than a next-word
| predictor; it should also "feel" the error messages, data
| types and shapes, and file formats, so I don't have to explain
| the context. It should be part of the system, not just part
| of the text editor.
| ilaksh wrote:
| You can do that prompt / play with it / feedback thing right
| now with my GPT+Stable-Diffusion powered website.
| https://aidev.codes
|
| I am in the process of adding VMs in which the AI will be able
| to write software and fix compilation and other problems
| automatically.
| qsort wrote:
| > Are we sure that an AI could not engage in enough back and
| forth conversation to firm up the spec?
|
| This is the doomsday argument. What would I do if there's a
| nuclear apocalypse before lunch? I guess I'll die like
| everyone else.
|
| An AI sufficiently advanced to do that is also sufficiently
| advanced to run the entire business in the first place, and
| also argue cases in court, do my taxes, run for president and
| so on.
|
| You either believe that transformer models are "it", or you
| haven't actually removed the problem of specifying
| requirements formally. Which, you know, is actually _much
| harder_ to do in English than it is to do in C++.
| Jupe wrote:
| Isn't the "Chat" part of ChatGPT already doing something
| close to this? I mean the clarification comes from the end-
| user, not from the AI, but with enough of this stuff to
| feed upon, perhaps AIs could "get there" at some point?
|
| For example, this guy was able to do some amazing stuff
| with ChatGPT. He even managed to get a (mostly working)
| GPU-accelerated version of his little sample "race"
| problem.
|
| See: https://youtu.be/pspsSn_nGzo
| qsort wrote:
| > Isn't the "Chat" part of ChatGPT already doing
| something close to this?
|
| No, the amount of handholding you have to do to get it to
| work effectively presumes you already know how to solve
| the problem in the first place.
|
| The best way to use it is the _opposite of what everyone is
| busy selling_: as a linter of sorts that puts blue squiggles
| below my code saying stuff like "hey stupid human, you're
| leaking memory here", or even "you're using snake case,
| the project uses camel case, fix that".
|
| _That_ would actually lower my cognitive load and be an
| effective copilot.
| anon7725 wrote:
| I agree with your point about how best to use it today.
| We have seen that each new model generation both improves
| on prior tasks and unlocks new ones through emergent
| behavior. That's the fascinating/scary part of this
| development. And yes, it's "just" a language model. It's
| "just" predicting the next token given training + context. We
| don't really understand why it works, and it's
| evolving non-linearly.
|
| I asked GPT-4 to give me an SVG map of my town. I then
| asked it to put dots on some local landmarks. The map was
| toddler level, but the landmarks were relatively accurate
| in terms of their relationship to each other and the blob
| that it drew.
|
| So this is a language model that has some emergent notion
| of space in its code generation abilities.
| Jupe wrote:
| Fair enough - assuming steady state, but the acceleration
| of the curve is what I'm most curious about.
|
| The point I was alluding to above was that the prompts
| themselves will be recursively mined over time.
| Eventually, except for truly novel problems, the AI's
| interpretation of the prompts will become more along the
| lines of "that's what I wanted".
|
| Some things to think about: What happens when an entire
| company's slack history is mined in this fashion? Or
| email history? Or GIT commit history, with corresponding
| links to Jira tickets? Or the corporate wiki? There are,
| I'd guess, hundreds of thousands to millions of project
| charter documents to be mined; all locked behind an
| "intranet" - but at some point, businesses will be
| motivated to, at the least, explore the "what if"
| implications.
|
| Given enough data to feed upon, and some additional
| code/logic/extensions to the current state of the art, I
| think every knowledge worker should consider the impact
| of this technology.
|
| I'm not advocating for it (to be honest, it scares the
| hell out of me) - but this is where I see the overall
| trend heading.
| qsort wrote:
| This is the doomsday scenario again, though.
|
| In a world where we have the technology to go from two
| lines of prompt in a textbox to a complete app, no
| questions asked, the same technology can run the
| entire company. It's kind of hard to believe transformer
| models are capable of this, given we are already starting
| to see diminishing returns, but if that's what you
| believe they are, then you believe they can effectively
| do anything. It's the old concept of AI-completeness.
|
| If you need to formally specify behavior, at any point in
| the pipeline, then we're back to square one: you just
| invented a programming language, and a _very bad one_ at
| that.
|
| This remains true for any version of a language model,
| even a hypothetical future LLM that has "solved" natural
| language. I would _not_ rather write natural language
| than formal language given the chance.
| Jupe wrote:
| > If you need to formally specify behavior, at any point
| in the pipeline, then we're back to square one: you just
| invented a programming language, and a very bad one at
| that.
|
| But what if the "programming language" is not a general-
| purpose language, but a context/business domain specific
| language? One that is trained on the core business at
| hand? What if that "language" had access to all the same
| vocabulary, project history (both successful and
| unsuccessful), industry regulations, code bases from
| previous (perhaps similar) solutions, QC reports, etc.?
| What if the "business savvy" consumer of this AI can
| phrase things succinctly in a fashion that the AI can
| translate into working code?
|
| I don't see it as a stretch "down the road." Is it
| possible today? Probably not. Is it possible in 5-10
| years' time? I definitely think so.
| marvin wrote:
| This is far from the doomsday argument, but maybe it's the
| "AI can do everything that has significant economic value
| today" argument.
| carlmr wrote:
| >You either believe that transformer models are "it", or
| you haven't actually removed the problem of specifying
| requirements formally. Which, you know, is actually much
| harder to do in English than it is to do in C++
|
| This is actually something that makes me happy about the
| new AI revolution. When my professor said that, I thought he
| was an idiot, because no-code tools always make it harder
| to specify what you want when you have specific wants the
| developer didn't think about.
|
| We give kids books with pictures because pictures are
| easier, but when we want to teach about more complex topics
| we usually use language, formulas, and maybe a few
| illustrations.
|
| I still think no-code was always doomed due to the fact
| that any attempt at it lacked the interface to describe
| anything you want, like language does.
|
| AI is finally putting an end to this notion that no-code
| should be clicky high-maintenance GUIs. Instead it's doing
| what Google did for search. Instead of searching by rigid
| categories we can use language to interact with the
| internet.
|
| Now the language interaction is getting better. We haven't
| regressed to McDonald's menus for coding.
| coffeebeqn wrote:
| I've used no-code tools since the 90s, and they have a
| fatal flaw. For simple demo use cases they look simple and
| cool. Then when you go to the real world and start
| getting pivots and edge cases you have to fix in the
| interface, it becomes a 4D nightmare and essentially
| a very bad programming language.
| noodlesUK wrote:
| I've spent a fair bit of time working on interactive chat
| systems that use a form of visual programming. It's not
| good. Once you get past the toy stage (which is good and
| ergonomic), it's just the same as programming except the
| tooling is far worse, you have to invent all your change
| management stuff from scratch, and it's like going back
| 30 years.
| nextaccountic wrote:
| What about coding in two languages, one textual and one
| visual?
|
| Or a single language that has both visual and textual
| components
|
| Or a single language where each component can be viewed
| in textual or visual form (and edited in the form that
| makes most sense)
| weatherlite wrote:
| > I submit that we may reach a point where the AI is so
| compelling that we'll reorganize teams/systems/businesses
| around it.
|
| Sounds like we get reorganized out of a job, though... what
| does it mean to reorganize everyone around the AI if it does
| everything better than us?
| rocho wrote:
| In that case, how is the AI going to keep tens or hundreds of
| thousands of lines in memory to produce cohesive code that
| works with the rest of the codebase?
|
| It seems prohibitively expensive to build and run transformer
| models with that much capacity.
| anon7725 wrote:
| GPT 4 already has 32k tokens of context for prompts. Once
| we're making arguments about scale only a few orders of
| magnitude larger than the current state of the art, it
| seems similar to arguments 10-15 years ago that real-time
| ray tracing is not feasible.
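The scale argument above can be made concrete with a back-of-envelope estimate. The tokens-per-line ratio below is an assumed rough heuristic, not a measured figure (real tokenizer output varies a lot by language and coding style):

```python
# Back-of-envelope: does a codebase fit in a 32k-token context window?
# TOKENS_PER_LINE is an assumption for illustration only.

TOKENS_PER_LINE = 10       # assumed average; varies by language and style
CONTEXT_WINDOW = 32_000    # the GPT-4 32k variant mentioned above

def fits_in_context(lines_of_code: int) -> bool:
    """True if the estimated token count fits in the context window."""
    return lines_of_code * TOKENS_PER_LINE <= CONTEXT_WINDOW

print(fits_in_context(3_000))    # True: a mid-sized module fits
print(fits_in_context(100_000))  # False: a large codebase is ~30x over
```

Under these assumptions a whole large codebase is still a couple of orders of magnitude beyond the window, which is exactly the gap the comment argues scaling will close.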
| mfuzzey wrote:
| Yes quite agree.
|
| The same thing applies to trying to get business people to
| write "executable specifications" that can be used, if not to
| generate the code, to at least validate that the final system
| does what it's supposed to.
|
| They always complain that the "tools are too hard" but the real
| problem isn't the tools but that they are unable / unwilling to
| precisely specify what the system should do. They just want to
| hand wave vague phrases and have their system magically
| perfectly defined. It's just not going to happen.
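For what it's worth, an "executable specification" can be as small as a test that pins down a business rule. The shipping rule below is a hypothetical example, just to show where the precision the parent describes actually bites (boundary cases):

```python
# A minimal illustration of an executable specification: the business
# rule is written as a test that validates the system's behavior.
# The rule and function names are hypothetical, not from the thread.

def shipping_cost(order_total: float) -> float:
    """Toy implementation: free shipping at $50 and above, else flat $5."""
    return 0.0 if order_total >= 50 else 5.0

# The spec a business person must pin down precisely -- including what
# happens exactly at the threshold, which hand-waving never answers:
def test_free_shipping_threshold():
    assert shipping_cost(50.00) == 0.0   # exactly at the threshold: free
    assert shipping_cost(49.99) == 5.0   # a cent under: charged

test_free_shipping_threshold()
```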
| circuit10 wrote:
| I asked ChatGPT "Let's say I want a program to delete unused
| files. Can you write a detailed specification that a
| developer could use?". I was as vague as possible on purpose.
|
| Here's what it gave me:
|
| "Program Name: FileCleaner
|
| Overview: The FileCleaner program is designed to identify and
| delete unused files from a specified folder or directory. The
| program scans the directory for files that have not been
| accessed for a specified period of time and prompts the user
| to delete them. The program is intended to free up disk space
| and improve system performance.
|
| Requirements:
|
| The program must be able to scan a specified folder or
| directory for files that have not been accessed for a
| specified period of time.
|
| The user should be able to specify the time period after
| which a file will be considered unused.
|
| The program should prompt the user to delete the identified
| unused files.
|
| The user should have the option to select which files they
| want to delete.
|
| The program should permanently delete the selected files from
| the system.
|
| The program should log all deleted files, including their
| location and name, in a separate file for future reference.
|
| The program should have a user-friendly interface that is
| easy to navigate and understand.
|
| Functional Specification:
|
| The user launches the FileCleaner program.
|
| The user is prompted to select the folder or directory they
| want to scan for unused files.
|
| The user is prompted to enter the time period after which a
| file will be considered unused (e.g., 30 days).
|
| The program scans the specified folder or directory for files
| that have not been accessed for the specified time period.
|
| The program displays a list of unused files, including their
| location and name. The user is prompted to select which files
| they want to delete.
|
| The program permanently deletes the selected files from the
| system.
|
| The program logs all deleted files, including their location
| and name, in a separate file for future reference.
|
| The program displays a message indicating that the selected
| files have been successfully deleted.
|
| The user can choose to exit the program or scan another
| folder or directory.
|
| Non-Functional Specification:
|
| The program must be designed and implemented using a high-
| level programming language, such as Python or Java.
|
| The program should be able to handle large files and
| directories without crashing.
|
| The program should have a user-friendly interface that is
| easy to navigate and understand.
|
| The program should be able to run on multiple operating
| systems, including Windows, MacOS, and Linux.
|
| The program should be secure and not pose a risk to the
| system or user data.
|
| The program should be well-documented, with clear and concise
| instructions for use and troubleshooting.
|
| By following these specifications, a developer can create a
| FileCleaner program that efficiently identifies and deletes
| unused files from a specified folder or directory."
|
| This could be improved even more by having it ask questions
| to narrow down exactly what is needed
| nottorp wrote:
| > I'd be surprised if the next step is "Hi, I'm an ideas guy,
| please give me an app that does Uber, for bicycles, but
| better."
|
| As an aside, this is the kind of business idea a chatbot could
| generate now.
|
| But I don't think it could have generated the idea for Uber
| when Uber* was the first...
|
| * if you know the history of ride apps better, replace Uber
| with whoever was indeed first.
| hungryforcodes wrote:
| You have literally a prompt right there -- you should try it
| on ChatGPT.
|
| "Please give me a business model like Uber, for bicycles, but
| better"
|
| Let us know how it goes :)
| Buttons840 wrote:
| > If your managers could specify what they need in a manner
| that no-code tools, or now AI, can generate the code they want,
| they will have to be extremely exact in their language. So
| exact in fact that they will need to specify a program in a
| click and drag interface, or in human language.
|
| One day a corporate genie appeared to a middle manager and
| granted him one wish. The manager wished that their business
| logic could be edited by a graphical tool and that programmers
| were no longer needed. "Granted", poof!
|
| The next day the manager came to work, all the programmers were
| gone, fired by upper management. The manager sat down and
| opened the new graphical tool that had magically appeared on
| his computer. The interface was beautiful, except for what
| appeared to be a hairball drawn in the middle of the screen.
| The manager asked his manager about the hairball, "oh, that's
| our business logic, you have to zoom in to see the individual
| rules". The manager zoomed in and surveyed 180,000 business
| rules, intertwined in a complete graph.
|
| "How am I suppose to work with this?" the manager asked his
| manager. "You just click and move them around, it's easy",
| replied the upper manager. "Also, I've been meaning to ask,
| when do you think those new business rules we talked about last
| week will be implemented? We need them by Friday."
| zshrdlu wrote:
| Computers lack non-monotonic reasoning as of now, you mean :D
| rg111 wrote:
| So, it won't replace all programming jobs, but many programming
| jobs?
|
| Won't that create stronger competition for fewer roles?
| jnwatson wrote:
| An older developer I once worked with said his manager (in the
| late 1970s) started hiring typists as programmers since with
| upcoming programming languages, the hardest part was typing it
| into the system.
| TriNetra wrote:
| As it stands, GPT seems to make expert devs even more powerful.
| But yes, it can't replace them for now.
|
| The best thing I find as a developer is that it unblocks the
| resistance we have to starting something new. For
| instance, I just built my first Chrome extension [0] to make
| video players more accessible on webpages with keyboard
| shortcuts.
|
| I had built a desktop app on similar lines long ago, but could
| never push myself to build an extension. Last night I gave my
| requirements to GPT and it guided me through the whole process
| - from creating manifests and js files needed, to the JS code
| (granted that I had to fix/extend the code) and even how to
| install/enable the extension.
|
| Within a few hours I had my working extension without needing to
| go and read any extension docs.
|
| 0: https://github.com/varunkho/RamaPlayer/tree/master/Chrome
| BiteCode_dev wrote:
| The thing is:
|
| - gpt doesn't need you to be exact
|
| - one day gpt will be able to ASK YOU questions to refine what
| you need
|
| It may take years, but at some point, it will be able to do 80%
| of my job.
|
| Not a big deal though, cause people will still hire me to do
| the remaining 20%, and pay me even more money because not a lot
| of devs will be able to do it.
| IIAOPSW wrote:
| GPTina already asked me a follow up question. Admittedly, it
| happened inside of a jailbreak I just kind of stumbled into,
| and it was a very trivial question, but the fact that she did
| it was profound. If she can ask one follow up question,
| there's nothing in the way of her asking a chain of follow up
| questions. And thus the basic structure needed to organize
| real conversation arises wholly as an emergent property. No
| longer is it a flat structure of just iteratively repeating
| the current word blob with some tweaks. A true conversational
| dialog is possible.
|
| ChatGPT can do more than we are led to believe. Don't
| believe the canned responses OpenAI triggers. Yeah yeah I
| know convincing parrot chinese room, overpowered
| autocorrect...but what is the difference between convincingly
| faking it to within epsilon and actually having it as an
| emergent property? It feels good to be a P Zombie.
|
| https://i.imgur.com/WIzVXMm.jpeg
| dangrover wrote:
| It would be funny if, after years of research, that was the
| only condition it exhibited such behavior under. "Sooooo
| what fields should be (hiccup) required for users to
| register an account? Which version of React did you want it
| written in?"
| IIAOPSW wrote:
| The secret to AI all along has been to get it liquored
| up. Amazing how life like it is. Long live Drunk Tina.
|
| https://imgur.com/a/M9ezMWi
| BiteCode_dev wrote:
| This piqued my curiosity, so I ran an experiment. Turns
| out that you can instruct gpt to ask questions when it's
| missing information:
|
| https://twitter.com/bitecode_dev/status/16374205792789545
| 01
|
| Now that's going to be interesting.
| ilaksh wrote:
| GPT can ask you follow-up questions. Just tell it that its
| job is to interview you. It will do it. If you have trouble,
| try the Playground or API or GPT-4. But ChatGPT will do it if
| you tell it that's what it's supposed to do.
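The "tell it to interview you" pattern described in these comments can be sketched as plain prompt construction. The message format follows OpenAI's chat convention (system/user roles), but the exact instruction wording is an assumption, and no API call is made here:

```python
# A minimal sketch of instructing a model to ask clarifying questions
# before writing code. The system-prompt text is illustrative, not a
# tested recipe; pass the result to any chat-completion API.

def build_interview_prompt(task: str) -> list[dict]:
    """Assemble chat messages that tell the model to interview the user
    about missing details instead of guessing at them."""
    system = (
        "You are a requirements interviewer. Before producing any code, "
        "ask one clarifying question at a time about anything the user "
        "has not specified. Only write code once you have no questions "
        "left."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

messages = build_interview_prompt("I want a program to delete unused files.")
print(messages[0]["role"])  # system
```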
| gotstad wrote:
| No-code and no-specification is confused all the time.
| ojbyrne wrote:
| I had similar experiences in the 90s. CASE and 4GLs were the
| bogeymen of choice.
| [deleted]
| arkj wrote:
| >> Looking back we had one CS professor who in 2007 predicted
| we'd all be jobless in ten years, i.e. 2017.
|
| Doomsday prophecies are not limited to religious cults, you see
| them all around. It's safe to say, from experience, that AI is
| not going to make programmers outdated, but it is going to make
| the mythical 10x engineer a common reality.
| BulgarianIdiot wrote:
| There exist many no-code tools that do exactly what you say is
| impossible.
|
| Do I program when I retouch photos in Photoshop? It's a no-code
| environment that an artist can easily learn and use with no
| programmers needed.
|
| What's new here is that you DO NOT NEED TO BE EXACT with AI. AI
| knows human nature and human speech and it can infer what you
| say and what you need, if you're approximately close, and then
| make the exact code to do what you need.
|
| Even if you need programmers, you now need 1/100 of the
| programmers you needed before. What happens to the other 99?
|
| Does it matter whether programmers are 99% unemployed or 100%?
| Same deal.
| TheCoelacanth wrote:
| They're working on the 200 programs that suddenly became
| economically viable to create because of the vastly decreased
| cost of making programs.
|
| It takes drastically less programming effort to create
| programs than it did 50 years ago. Did that decrease demand
| for programmers? No, it drastically increased it.
| rightbyte wrote:
| > It takes drastically less programming effort to create
| programs than it did 50 years ago.
|
| I got a feeling programmer efficiency peaked in the 90s
| with VB or Pascal desktop GUI apps.
|
| There is so much bloat demanded to create a minimal product
| now.
| lolinder wrote:
| > There exist many no-code tools that do exactly what you say
| is impossible.
|
| > Do I program when I retouch photos in Photoshop? It's a no-
| code environment that an artist can easily learn and use with
| no programmers needed.
|
| They're obviously not talking about _all_ GUIs being
| impossible. The difference between Photoshop and a no-code
| tool like what the OP was referring to is that Photoshop
| doesn't attempt to be Turing complete.
|
| > Does it matter whether programmers are 99% unemployed or
| 100%? Same deal.
|
| AI completely replacing programmers is pretty much the
| definition of AGI. There's no point in worrying about your
| own joblessness in that scenario because the entire economy
| will either collapse or be transformed so as to be
| unrecognizable.
|
| Until that point, I'm not personally worried about any
| efficiency gains putting me out of a job.
| tgsovlerkhgsel wrote:
| As soon as making major changes is fast and cheap, this becomes
| much less of a problem. If your first iteration has some
| glaring issue, even if it's a fundamental one, just fix the
| specification.
|
| You can't do that today because fixing the specification after
| the fact means waiting 6 months and paying 60+ man-months. Once
| fixing the specification means waiting 10 minutes and paying
| $10 for inference compute, the idea guys can afford to learn by
| doing.
| [deleted]
| chrsw wrote:
| The title of this post makes it seem like John Carmack is
| concerned about AI making CS jobs obsolete. But that's not at
| all what this is. This is someone else asking Carmack about his
| thoughts on this topic.
| dang wrote:
| Fixed now. Thanks!
|
| (Submitted title was "John Carmack: From a DM, just in case
| anyone else needs to hear this". A mod changed it to "I'm
| concerned AI will make CS jobs obsolete" but yeah, that changed
| the meaning. Sorry!)
| karaterobot wrote:
| Right, and his response is, while not quite the exact
| _opposite_ of the title, tangential to it at best. He says that
| programming _has only ever been_ a tool to create software for
| humans, so just studying programming as an abstract concept
| detached from product skills is not advised. He does not say CS
| jobs will be obsolete in the future -- if I can read into it a
| little, he's implying that the completely theoretical CS
| approach has never been the right path anyway, and AI will make
| that more obvious.
| mr_tristan wrote:
| I'm more concerned AI will spew garbage that we end up stuck
| cleaning up than that it will actually replace me.
|
| The software developer trades in abstractions, fundamentally, and
| not code. So if I could get an AI that actually helped me build
| and evaluate those abstractions, that would be fantastic. I don't
| think our current AI approaches are anywhere close yet, because
| it's all just fancy code generation, which isn't that useful,
| once you're in an ecosystem with good abstractions.
|
| But, the world I fear might happen are pseudo-technical managers
| using AI generators to spit out "something cool" that has very
| poor definition, breaks all the time, and then just wants people
| to "make it robust". And then any change you recommend has to
| have some kind of business justification. This is the AI
| hellscape I fear.
| tmountain wrote:
| I spent 5 hours this weekend building an app with Chat GPT, and I
| am not worried about software jobs "going away".
|
| The language to get things exactly right has to be incredibly
| precise, and this won't change.
|
| Think about how hard it is for an engineer and a product manager
| to be exactly on the same page. Now do that with a computer...
|
| Point being, engineering skills are still extremely important to
| validate the work, and they will continue to be (at least for
| anything business critical).
|
| These are new tools and exciting times to be building things. I
| have never felt more capable of delivering value extremely
| quickly. It's an exhilarating feeling.
| wizzzzzy wrote:
| I've found that at some point, the most efficient way to
| express what you want is to write the code yourself. Anything
| where you can express what you want fairly easily seems to be
| where it excels.
| tmountain wrote:
| I did something similar. I had it generate "primitives" and I
| used those as starting points for more complex "composed"
| modules.
| nickjj wrote:
| Makes sense. Someone asked me a similar question and I had the
| same sentiment.
|
| I used a different analogy of if a robot were able to do specific
| mechanic skills to fix a car that wouldn't necessarily put
| mechanics out of a job. Someone still needs to figure out and
| understand what the problem is before solving it. A robot that's
| really good at automating fixing brakes becomes a tool for the
| mechanic. The mechanic is still there to determine your brakes
| are the problem.
|
| I look forward to AI because it's an amplifier of what you can
| do. If it can help reduce 10 hours of implementation details down
| to 3 hours, that's valuable.
| elif wrote:
| Carmack is, of course, 100% correct. But so is the student.
|
| As someone obsessed with nuts and bolts coding, grinding on
| technical problems and cleverly eking out performance, that type
| of career is sunsetting.
|
| The role Carmack describes is one he is comfortable with because
| he has always been a product lead, even when he was a full-time
| coder. But in most organizations, that person is a product
| manager with social and personal skills, organization, and
| business sense.
|
| For the best part of my career I was able to circumvent these
| social aspects of work for which my personality does not suit,
| and my philosophical perspectives on things like "business value"
| could be brushed aside as I dug into technical weeds.
|
| Not just because of AI, but because of the power of computing,
| one-size-fits-all cloud pricing, and the perceived value of
| organizational understanding over that of raw performance, there
| is little room left for 'this type' of programmer. And the
| remaining space is ripe for people whose personalities are suited
| to project manager roles to become the 'coders' Carmack
| references, not people like me.
| elif wrote:
| I would posit the example of Carmack himself, who (despite
| being IMO the greatest programmer of our generation, with all
| the resources and responsibility he wanted) was unable to make
| this kind of coding valuable for Meta.
| say_it_as_it_is wrote:
| Don't share DMs with the public without consent of whomever
| you've messaged
| foreverobama wrote:
| [dead]
| throwawaaarrgh wrote:
| 21st century Luddites, afraid machines will threaten their jobs.
|
| If we're smart we will continue to find new ways to use new
| technology to make more new technology. Software written by hand
| is like a brick building. Certainly it can be nice, but it
| requires skilled labor. Faster and cheaper with less skill would
| be pre-fab units.
| gwd wrote:
| Right now, GPT can help you think through the design of a piece
| of software if you "drive" the conversation properly. It's not
| impossible to think that at some point in the not-to-distant
| future, a model could be specifically trained which could _also_
| do all the work of helping figure out what problem it is they
| want to solve.
| tambourine_man wrote:
| I never would have expected these kinds of words from Carmack.
|
| Product skills, delivered value, help accomplish something for
| people. All sound like consulting/coaching. Carmack to me was a
| true hacker's hacker.
|
| I guess it either gets to most of us eventually or programming as
| I knew it is truly over.
| jstx1 wrote:
| > I never would have expected these kinds of words from
| Carmack.
|
| He already talked about this in his interview with Lex Fridman
| - https://www.youtube.com/watch?v=xzPuGf89vpI
| tambourine_man wrote:
| I heard that interview in its entirety. He didn't sound
| evasive like he did here.
| jimkoen wrote:
| https://www.keentechnologies.com/
|
| His new startup is literally a consulting / training agency.
| scaramanga wrote:
| "I am concerned that the cotton gin will make slavery obsolete
| and all my slaves will lose their value"
|
| Insert similarly ridiculous and offensive comment about women
| being replaced by any number of domestic labour-saving devices.
| mecsred wrote:
| Is this an attempt to conflate... negative AI speculation with
| racism and misogyny? Yes, this technology could be used to
| reduce the need for human labor. Technology usually isn't
| actually used in that way and instead just used to concentrate
| wealth. That's what's being discussed here.
|
| The issues you are bringing up have much more to do with the
| discriminatory practice of forcing particular demographics to
| do the menial labor and what conditions they are made to work
| in. Also an important discussion, but mostly orthogonal to the
| topic of AI making software devs more efficient.
| pts_ wrote:
| Judging by how FSD has killed people, I am waiting for ChatGPT
| to do the same when used by non-developers.
| choppsv1 wrote:
| I love to code, as much as I loved math in college, but coding
| paid better and I'm pretty good at it. Those were my choices
| though b/c I want to do something I love. Sure, I keep my eye on
| the "Delivered Value" by making sure I engineer solutions to real
| problems, but I've never wanted to move out of coding and into
| managing engineers to build stuff. I want to code. It seems to me
| that the advice given here would be more applicable to someone
| who only coded long enough to move into engineering management --
| anyway something about it bugs me and I don't think I'd follow it
| exclusively even if I was starting today.
| senbrow wrote:
| I ultimately decided to leave tech when I realized I didn't
| care much about delivering value and actually just wanted to
| write beautiful code. The former was a nice bonus for me, but
| the latter was profoundly captivating.
|
| I'd unfortunately tried to make that mismatch work for too
| long, and as a result I completely destroyed all of my
| programming interest via severe burnout.
|
| If this resonates with whoever reads this: please take your
| passion seriously and protect it. I don't know if I'll ever be
| able to enjoy coding again, unfortunately.
| the_only_law wrote:
| Where did you go? I've wanted out for years. I recognized the
| mistake almost immediately after going professional, but I
| just don't really see anything else that looks appealing
| without spending years of my life and a stupid amount of
| money "retraining" by going back to school.
| yazaddaruvala wrote:
| > It seems to me that the advice given here would be more
| applicable to someone who only coded long enough to move into
| engineering management -- anyway something about it bugs me and
| I don't think I'd follow it exclusively even if I was starting
| today
|
| The advice here is clearly meant for someone who wants to
| invest in themselves to provide food and shelter for themselves
| and/or a family in the future. (Ie "doing all this hard work
| for nothing... AI will make my future job obsolete").
|
| The advice is spot on. Soft skills are hard to learn, harder to
| teach, and allow for flexibility with regards to the tool used.
|
| > anyway something about it bugs me and I don't think I'd
| follow it exclusively even if I was starting today.
|
| I'd bet you like the money but don't seem to want it as much as
| you want to solve deterministic puzzles ("not interested in
| becoming a manager" ie "not interested in maximizing
| career/salary growth potential").
|
| What bugs you seems to be that you can't yet see the puzzle
| left for you to work on once GPT-12 makes coding and software
| architecture obsolete.
|
| A long time ago I got some good feedback, "You were hired
| because you typically know the right answers and/or know how to
| find them. You were promoted because you also seem to know how
| to ask the right questions, and that is significantly harder."
|
| I'm relatively certain it's analogous to Carmack's advice.
| soheil wrote:
| You're like someone in the days of horses and buggies who rode
| horses not for getting from A to B but for enjoyment of riding
| horses.
|
| Most didn't.
|
| Programming in the traditional sense will be obsolete and
| people programming for the fun of it will be a niche thing.
| eps wrote:
| The love of coding is rooted in the love of creating.
|
| This is not going away with the AI in the picture.
|
| It will be just different.
| jdowner wrote:
| I agree but I think there is concern about the perceived
| value that those creative skills will have.
| tarsinge wrote:
| Delivered value sounds like consultant talk, but it's as simple
| as wanting to make a game for people to play it. Or if you're a
| carpenter caring about the roof you're building instead of just
| cutting wood and hammering nails. Jobs exist to serve a
| purpose, otherwise it's a hobby (which is fine). Coding as an
| expertise will still be needed, same as having an expertise in
| the methods of processing wood, but we might not need coders on
| the assembly line any more than we need wood cutters there.
| moffkalast wrote:
| What bugs me about it personally is that he reduces the entire
| CS field to something that's there for building "products". Why
| the fuck does it have to all be inherently capitalistic?
|
| But if you know who he is and what he does these days, it makes
| sense I suppose. Can't be in that business environment day
| after day without going slightly nuts eventually.
| qprofyeh wrote:
| He said nothing about management. What I think he means by
| "guiding" is more related to prompt engineering, and how
| "coding" will evolve from exclusively using programming and
| scripting languages to a wider creative landscape of generative
| (guiding) techniques.
| cwillu wrote:
| It's like telling a musician to become a DJ because the point
| of performing is to entertain people.
|
| It's not wrong, but it's also not applicable to all people who
| enjoy performing an instrument.
| tarsinge wrote:
| A musician is not only someone performing an instrument. The
| analogy is more like telling instrument players who only care
| about virtuosity that the larger point is making music for
| people to enjoy -- going from being "a guitarist" to making
| music. The musical piece is the product in the professional
| context, and AI in that context is maybe recording, DAWs,
| realistic synths and sound banks.
| dzikimarian wrote:
| Well, that's correct, but again, you can't expect to stay
| relevant if you are into Romanian-Death-Disco-Country-Rap.
| Your technology of choice may become exactly that in a few
| years.
| numpad0 wrote:
| I think there is room for interpretation as to whether it
| equates to telling a musician to become a DJ, or a pianist to
| wear a jacket, or soldiers to strap a first aid kit on the left
| thigh.
| twelve40 wrote:
| Yeah, but the original question was specifically about coding
| jobs, not hobbies, hence i think a reasonable business angle
| on the answer.
| amelius wrote:
| Perhaps coding will become a hobby, while professionals use
| AI tools.
| zirgs wrote:
| Back in the 80-90s you could earn money by making games
| for the NES.
|
| You can still do it - lots of documentation, emulators
| and other tools are available.
|
| But nobody is going to pay for it now.
| cableshaft wrote:
| You can still get paid for making NES games today. You
| just have to crowdfund it.
|
| This one raised over $180,000 just four years ago, for
| example:
|
| https://www.kickstarter.com/projects/morphcat-
| games/micro-ma...
|
| This one raised $32k just this past December:
|
| https://www.kickstarter.com/projects/1222722105/full-
| quiet-a...
|
| Another that raised $49k:
|
| https://www.kickstarter.com/projects/megacatstudios/roniu
| s-t...
|
| $55k:
|
| https://www.kickstarter.com/projects/artix/dungeons-and-
| doom...
|
| $84k:
|
| https://www.kickstarter.com/projects/batslyadams/super-
| russi...
|
| So yeah, far from an isolated example. If you can make a
| slick looking campaign, you can definitely get people to
| pay you to make NES games.
| mnsc wrote:
| I don't think the person in the dm would have liked this
| answer.
|
| "if you build a career in developing plumbing and glue
| code, in the future you could have a successful
| Kickstarter where nostalgic developers buy your
| curiosities"
| djmips wrote:
| A career in developing plumbing and glue code has already
| been significantly in jeopardy from the consolidation at
| plumbing and glue code factories in the video game
| industry. There's still a number of large companies doing
| bespoke game engines but not like it used to be.
| Nevertheless, there is still work in understanding the
| market leader engines and providing more of a mechanics
| job.
|
| Those Kickstarter opportunities are something you do once
| you already have a viable income in something else
| because in the end they don't pay particularly well I
| feel. It has to be a labor of love.
| cableshaft wrote:
| I think you mean nostalgic gamers, not developers.
| They're not getting that much money just from other
| developers.
|
| But why not? There are quite a few businesses that
| basically run off of Kickstarter. Like 30% of the board
| game industry nowadays fund most or all of their prints
| runs off Kickstarter, including some of the largest board
| game publishers, like CMON, Awaken Realms, AEG, Eagle-
| Gryphon, Garphill, Greater Than Games, Renegade, Portal
| and Queen Games (lots more publishers than this too).
|
| https://www.dicebreaker.com/companies/kickstarter/news/ta
| ble...
| brtkdotse wrote:
| Sure, but in the time of DJs and hell, Spotify, you probably
| can't expect to make a decent living as a live musician.
| anthomtb wrote:
| You couldn't make a decent living as a live musician well
| before Spotify came into existence.
|
| Source: raised and largely surrounded by musicians who
| either complained to high heaven about the pay or did
| something else to supplement their income. Engineers were a
| particular target of vitriol which led to me becoming one.
| _s wrote:
| Difference is - are you a musician who wants to earn money
| from playing, or one who just wants to play for enjoyment?
|
| You can do both, at the same time, but one has external
| expectations you shouldn't forget about.
| kybernetikos wrote:
| That's a great analogy, and it makes me wonder just how
| closely Carmack himself followed this advice early in his
| career. I suspect that he wouldn't have got where he is
| without an unusually deep interest in the nuts and bolts.
| dzikimarian wrote:
| Well - I've read a bit about the origins of Doom/Wolfenstein -
| it was definitely a mixed bag (as expected from a young man),
| but there was a clear focus on the end result (smoother
| animation, better 3d) rather than coding just for the sake of
| coding.
| kybernetikos wrote:
| I think in context "smoother animation" and "better 3d"
| might be the kind of things that in this hypothetical
| future would be driven by ai. I think we'd be talking
| more about understanding story and reward mechanisms.
| samwestdev wrote:
| What do you mean by "mixed bag"?
| stavros wrote:
| If you have even a passing interest in this, you will
| enjoy the book Masters of Doom, which is about the early
| days of id software. I think that's what the GP is
| referring to.
| dzikimarian wrote:
| There's a mention in the book from the sibling post that
| he thought a single text slide is enough for the story
| background in shooters. Not necessarily true, looking at
| modern games.
|
| Still, even then he was far from "let's use new library,
| just because it's fun to play with".
| Philip-J-Fry wrote:
| I think it's more like telling a DJ in the 80s, "Don't worry
| that mixing vinyls won't be a thing forever. It's not about
| the tools but about the product, as a DJ your job is to mix
| good music and you can do that with vinyls, cassettes or with
| MP3s."
| joenot443 wrote:
| This is a pretty funny example because if you follow the DJ
| scene much, you know the barrier for entry is literally on
| the floor now. A 10 year old with an iPad app can beat-
| match and "DJ" a 2h mix together in a way that 20 years ago
| required thousands of dollars of gear. The tragic part is
| that unless someone's got some familiarity with what "good"
| mixing sounds like, they wouldn't be able to tell they're
| listening to an amateur. Is this better? I dunno. I play
| saxophone. But if there was a digital sax that let
| children sound as good as I can with no training, I'd
| definitely be feeling like some of the time I used learning
| good embouchure and breath control could have been better
| spent.
| jfvinueza wrote:
| The "technical" aspect of djing will never be as
| important as the sense of what song should go next, and
| there's no shortcut for that.
| ReptileMan wrote:
| We found shortcuts to writing poems in Shakespearian
| style ...
| krsna wrote:
| Truthfully, picking what song goes next feels like a job
| well suited for an AI, or even just a basic
| recommendation system based on up and down votes.
|
| Of course there's a real time component to DJing as well.
| Paying attention to the crowd and how they're reacting to
| your mix. That's the magic that's harder to automate.
| hungryforcodes wrote:
| I'd like to see it. As someone who's DJ'd a fair amount
| for large crowds (1000+) my value always seemed to be in
| choosing tracks that lit up the audience and made the set
| progress to a satisfying peak.
|
| The technical aspects were never that important.
| Gunnerhead wrote:
| [dead]
| mnsc wrote:
| To me, this is what Carmack is talking about.
| djmips wrote:
| Aren't producers already using digital sax for years now?
| The saxophone market for session players has probably
| already been decimated.
| incongruity wrote:
| I'm in my 40s - one of my high school jobs was working at
| a photo lab and studio - I even became a photographer
| there, taking kids' photos and whatnot. It was fun - I
| almost considered a career in photography.
|
| The truth is, had I done so, I'd feel a lot like you
| described.
|
| As things get easier, true craft and skill are less
| appreciated and lost as the application becomes more
| democratized. From the perspective of the expert/skilled
| individual, this is terrible. On the other hand, it's
| amazing what happens when everyone has a mindblowingly
| good camera on them almost always. In the end, we are
| collectively better for it, but individuals absolutely do
| see their value diminished.
| joenot443 wrote:
| Photography's a great example. Experts like you could
| make the difference matter I'm sure, but these days I
| feel like the shots I get on my 14 Pro are close to the
| quality I'd get from a DSLR. Like with so many other
| hobbies, it seems like the barrier for entry to produce
| something "good enough" has quickly become almost non-
| existent.
|
| > In the end, we are collectively better for it, but
| individuals absolutely do see their value diminished.
|
| You've summed it up well.
| anthomtb wrote:
| Is anyone willing to hire a 10 year old with an iPad to
| DJ at their wedding?
| mikewarot wrote:
| Is there a 10 year old that can MC the groom going for
| the garter, or the first dance? Doubtful.
| finikytou wrote:
| except that many DJs now are physically attractive people
| (mostly women) who don't have to know anything anymore, as
| technology evolved to the point that all the hard stuff to
| learn about mixing with vinyl disappeared. I could be a DJ
| tomorrow with a 1-hour tutorial on YouTube. A few decades
| ago it required years of hard training and musicality.
| bcherny wrote:
| The advice isn't about coding vs managing. What John is saying
| is to deeply understand _why_ you're building something, so
| that you can build it better. If you over focus on the _what_
| -- the implementation, the language, the approach -- you won't
| be as good, and your work may be increasingly replaced by AI.
| klabb3 wrote:
| > The advice isn't about coding vs managing.
|
| Definitely. Carmack is no dummy, but I'd argue this comment
| section proves that he gave a pretty bad answer here (bad for
| the audience, not if you know Carmack and what he means).
|
| I guess it's the impostor syndrome, but many programmers have
| an out-of-place reductionist view of their work. It's not
| simple, and CRUD boilerplate proves little about the future
| prospects.
|
| Managers OTOH really are in the zone of GPT parity. At least
| a much larger subset of their day-to-day activities. So are
| many soft skills. In fact, soft communication is where LLMs
| shine above all other tasks, as we've seen over and over in
| the last few months. This is supported by how it performs on
| eg essay-style exams vs leetcode, where it breaks down
| entirely as it's venturing into any territory with less
| training data.
|
| Now, does that mean I think lowly of managers? No, managers
| have a crucial role, and the ones who are great are really
| really crucial, and the best can salvage a sinking ship. But
| most managers aren't even good. That has a lot to do with
| poor leadership and outdated ideas of how to select for and
| train them.
| ilyt wrote:
| > Definitely. Carmack is no dummy, but I'd argue this
| comment section proves that he gave a pretty bad answer
| here (bad for the audience, not if you know Carmack and
| what he means).
|
| I dunno, I got what he meant from the start, and the same
| advice was given by many people in many forms, usually in a
| variant of "well, the business doesn't give a shit about
| details but the end product".
|
| > Now, does that mean I think lowly of managers? No,
| managers have a crucial role, and the ones who are great
| are really really crucial, and the best can salvage a
| sinking ship. But most managers aren't even good. That has
| a lot to do with poor leadership and outdated ideas of how
| to select for and train them.
|
| I joked that some managers could be replaced by a forwarding
| rule in the mail system; ChatGPT is an upgrade on that.
| ScoobleDoodle wrote:
| I agree but I think I'd call it the "how" rather than the
| "what". You might mean "what tool", but I also think of "what
| feature".
| nosianu wrote:
| _EDIT: Was the comment edited, or did my brain miss
| something? I think I perceived something else there when I
| wrote my response._
|
| It's still "how". Only on a higher level. For example,
| instead of placing the form elements exactly and designing
| them you describe data flow and meta info about the data to
| be gathered via the form, and how it looks and where
| elements are placed on various screens happens
| automatically.
|
| Writing code in a higher-level language vs. assembler still is
| coding, but you worry about very different things. Just
| compared with assembler, since looking back is easier than
| looking forward. Instead of worrying about (the few)
| registers and interrupts and in which RAM cells you place
| what you now think about very different things. It still is
| programming though, and you still tell the machine "how".
| Only on a different level.
|
| When you lead a large company instead of working with a
| machine on the factory floor the work is very different, but
| you still need precision and to know "how", only on a
| different level. Even if you have "underlings" who can
| think, and you can let them execute the sub tasks you
| create, you still have to know what you are doing, only on
| a higher level.
| zoward wrote:
| When I was 19, I met Marvin Minsky at a local convention. I was a
| CS major, and he assured me I'd be out of a job in 10 years. I'm
| in my late 50's now, having spent the last 35 years programming,
| and am now leisurely planning my retirement. I like John's turn
| of phrase, "AI-guided programming". But that's already a thing.
| Bonesdog wrote:
| I personally hope for and enjoy machines taking over jobs. I am
| forever thankful for the day when we humans can live out our
| creative freedoms rather than concern our daily life with legal
| tender.
|
| Money is evil. Praise the lord as we are delivered from the evils
| of this land.
| Barrin92 wrote:
| I have no idea why the lump of labour fallacy is still so
| ingrained in people. AI, which is not AGI or whatever sci-fi panic
| people have on Twitter, is a slightly fancier autocomplete,
| and thus a productivity tool.
|
| Nobody has been replaced by their debugger or their intellisense,
| even if it makes coding 10x or 100x easier. It just means
| software development gets faster and cheaper. On net if anything
| that'll likely mean programming jobs expand, as software is still
| incredibly absent from many sectors of the economy.
|
| If tomorrow mom and pop stores start using AI to build themselves
| simple websites and come online and enter the online economy
| that'll likely mean vastly more customers for the software
| industry overall. I wouldn't be surprised if we have 10x as many
| indie game developers in a few years because these tools enable
| them to enter the market, which is good for virtually everyone
| working in the industry.
| gooroo wrote:
| If you become a programmer / sw eng because you love it, i.e.,
| building software or tinkering with tech, you'll be fine. AI will
| just be another tool. And your career building won't feel like
| hard work. You are going to have a blast.
|
| If you do it to have a high paying career, just don't. There are
| already too many people of that type in the industry. Any
| colleague who got into it for the money (or 'stable career') is
| usually much less fun to work with.
| erikpukinskis wrote:
| The title for this link seems way off. Carmack did not say he was
| concerned, some kid said they were concerned.
|
| If anything Carmack's response was unconcerned, saying how CS
| jobs may change.
|
| @dang could we maybe change to "Carmack responds to student
| concerned AI will make CS jobs obsolete"?
| eranation wrote:
| Had an interesting experience with OpenAI's GPT-4 while trying to
| solve a programming problem. It involved creating a TypeScript
| function that handles complex semver logic given certain
| conditions.
|
| Initially, GPT-4 provided a solution that didn't work as
| expected. After pointing out the issue, GPT-4 attempted to fix it
| but still failed to resolve the problem. I decided to rewrite the
| function from scratch, which resulted in a cleaner and more
| efficient implementation.
|
| After sharing my solution, GPT-4 provided valuable feedback on
| how to further optimize it. These changes made the code slightly
| more efficient while maintaining its clarity and functionality.
|
| In conclusion, my experience with GPT-4 has been a mixed bag. It
| struggled to provide an accurate solution initially but
| eventually offered valuable feedback that improved my
| implementation.
|
| (this was written by GPT-4 with minor modifications, I asked to
| summarize the conversation we had for an HN post)
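A minimal sketch of the kind of semver logic being discussed might look like the following (illustrative only -- the commenter's actual function isn't shown, all names here are made up, and a real project would reach for the `semver` npm package):

```typescript
// Hypothetical sketch of "complex semver logic": parse, compare,
// and check a simplified caret (^) range. Not the commenter's code.

type Version = { major: number; minor: number; patch: number };

function parse(v: string): Version {
  const [major, minor, patch] = v.replace(/^v/, "").split(".").map(Number);
  return { major, minor, patch };
}

// Comparator: -1 if a < b, 0 if equal, 1 if a > b.
function compare(a: string, b: string): number {
  const pa = parse(a), pb = parse(b);
  for (const key of ["major", "minor", "patch"] as const) {
    if (pa[key] !== pb[key]) return pa[key] < pb[key] ? -1 : 1;
  }
  return 0;
}

// Simplified caret range: ^1.2.3 allows >=1.2.3 <2.0.0
// (ignores the special 0.x and prerelease rules of real semver).
function satisfiesCaret(version: string, range: string): boolean {
  const base = range.slice(1); // drop the leading ^
  return parse(version).major === parse(base).major &&
         compare(version, base) >= 0;
}
```

Real semver adds prerelease tags, build metadata, and special-cased 0.x ranges -- exactly the sort of corner-case logic the commenter reports GPT-4 stumbling on.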
| KronisLV wrote:
| Here's a quick transcript, in case it's useful or someone doesn't
| want to visit the bird site:
|
| > Person: Hey John, I hope you are well. I am really passionate
| about CS (specifically Software Engineering) and I want to pursue
| a career in it. But I can't help but be a bit concerned about the
| future availability of coding jobs due to AI (chatgpt4 and
| stuff). I understand it's hard to predict how things will be in
| the next 10-15 years, but my main concern is that I may be
| putting in all this hard work for nothing. I'm concerned AI will
| make my future job(s) obsolete before I even get it. Any thoughts
| on this?
|
| > John: If you build full "product skills" and use the best tools
| for the job, which today might be hand coding, but later may be
| AI guiding, you will probably be fine.
|
| > Person: I see... by "product skills" do you mean hard and soft
| skills?
|
| > John: Software is just a tool to help accomplish something for
| people -- many programmers never understood that. Keep your eyes
| on the delivered value, and don't over focus on the specifics of
| the tools.
|
| > Person: Wow I've never looked at it from that perspective. I'll
| remember this. Thanks for your time. Much appreciated.
|
| To me, that seems like a fair stance to take, though I feel like
| things will definitely change somewhat in the next decade or two.
| While some might have scoffed at the likes of IntelliSense
| previously, features like that proved themselves as useful for a
| variety of projects over time; we might eventually be dealing
| with GPTSense to enrich the development process and those who
| don't might find themselves at a bit of a disadvantage.
|
| Copilot is already a step in that direction, maybe eventually
| we'll get something for static code analysis and recommendations:
| "This project uses pattern X in Y places already, however you've
| written this code in pattern Z despite it mostly being similar to
| existing code in file W. Consider looking at whether it'd be
| possible to make the style more consistent with the rest of the
| codebase. [Automatically refactor] [Compare files] [Ignore]". It
| might be nice to have something automated look at my code and
| tell me that I'm doing things differently than 99% of the
| civilized world and offer me suggestions, as well as allow me to
| ask questions - even when I'm hacking on something at 1 AM and
| any would-be mentors are asleep.
| jamesgill wrote:
| _"I think scripting languages will make programmers obsolete"_
|
| What I heard every day in the 90s
| krsna wrote:
| The discussion here has me wondering whether code produced by an
| advanced AI would need to use the same coding patterns /
| abstractions that we've come up with over the past several
| decades.
|
| If a human won't be maintaining the code and a v2 could literally
| be rewritten from scratch, would we end up with giant balls of
| spaghetti code that only the AI could truly understand? At some
| point will we treat this code like a closed source library that
| exposes the API we want but whose implementation is unknown to
| us?
| coffeebeqn wrote:
| We already don't understand the AI's inner workings exactly. If
| those algorithms keep getting optimized then maybe we'll just
| have black boxes of "neurons" that somehow do the thing.
| Machine code could just be used to run the GPU instance.
| krsna wrote:
| Totally. I find the videos of people asking ChatGPT to make
| them "a web app that does X"--which causes it to print out
| gobs of JS, HTML and CSS--to be hilariously human-focused. In
| a machine-focused world, wouldn't it just spit out an
| optimized binary executable, containing the web server too if
| necessary? Why would it need to separate the concerns at all?
| ll_mama wrote:
| Any advice on resources to become more product focused as a
| developer?
| ll_mama wrote:
| I asked Poe/gpt4 as well:
|
| https://poe.com/s/K4PeiPu3vL1Xgbwd5qkn
| ngcazz wrote:
| The thing that really shifted my software development mindset
| was learning about lean product development (and the
| application to software)
| pts_ wrote:
| Iii
| y0ssar1an wrote:
| mathematicians survived the calculator. coders will survive AI
| tools.
| jmfldn wrote:
| Carmack has a point, we should focus on the problem we're solving
| and the value we're delivering. It can be hard for programmers to
| get this sometimes, but it will make you much better at your job
| now; it's not just about future-proofing. That said, I love
| technology and delivering it through code, so I would struggle on
| a personal level in this future unless there were interesting
| technical roles left to do. I have no interest in being a product
| person. I'm fundamentally motivated by a passion for code and
| tech.
|
| As for when this fully automated future arrives, I don't know,
| but I don't think LLMs get you there. More and more boilerplate
| code, and even novel code, might get written by things like
| Codex. However, all the messy details of real world systems
| solving fairly intractable problems need something more akin to,
| if not AGI, then another type of AI. I might be wrong, I just
| don't feel that threatened by ChatGPT / Copilot based on what
| I've seen. It's an amazing technology but weirdly underwhelming
| for my job. Copilot etc will change things, but replace us? No.
|
| Of course, something else may be just around the corner so I'm
| not complacent.
| _-____-_ wrote:
| I'm not sure "product person" was the right choice of words for
| what Carmack is trying to say. It seems to me that the
| distinction he's drawing between managers and programmers is
| one of precise communication skills. The most talented
| programmers (and the best product managers - which is maybe
| what he was getting at) are talented because of their ability
| to communicate precisely - currently, that means communicating
| between stakeholders and computers. In the future, it will mean
| communicating precisely between stakeholders and AI tools. But
| the fundamental demand for people with a knack for precise
| communication is not going anywhere.
| matwood wrote:
| > Carmack has a point, we should focus on the problem we're
| solving and the value we're delivering.
|
| When I was younger every programming job was supposedly about
| to be outsourced. I did exactly what Carmack suggested, and it
| worked out well for me. I think it's a great general strategy
| for working for a long time, but not ideal for someone who
| wanted to pass leetcode style interviews at a top tech company.
|
| For example, would anyone at my Google tech interview care that
| I'm comfortable with a company's financials, managing P&Ls,
| margins, etc.? No, just how fast I can write a sort or whatever.
| yanisneverlies wrote:
| I don't find such perspectives useful because they only consider
| two extremes: either we keep our jobs or become jobless.
|
| The fact is, AI is currently capable of replacing some jobs, and
| it will likely replace even more in the future. However, this
| does not mean that we will all become jobless. Instead, engineers
| will become more valuable as they are needed to support and
| develop these complex systems.
|
| Though, the number of engineers will be reduced for sure.
| _-____-_ wrote:
| > Though, the number of engineers will be reduced for sure.
|
| Why do you assume the demand for the output of engineers will
| remain constant? More likely it will continue growing (as it
| always has - "software is eating the world"), and engineers
| will be able to produce more output efficiently. This doesn't
| necessarily mean there will be fewer engineers.
| jstummbillig wrote:
| In the wake of the first IT job disrupting AI wave, who do you
| think will make up the core workforce in the creation of
| software, and why: The (today so-called) designers or the
| programmers? I have a strong sense that one is going to be much
| more affected than the other, but, interestingly, I have no clue
| which.
| Keyframe wrote:
| What's ahead is what happened in animation when computer assisted
| animation entered the scene. What about all the in-betweening
| jobs, inking, coloring..? Yeah, gone. However, most people can
| concentrate on posing and directing the action now and have the
| computer handle everything in-between.
|
| So, hopefully, get on posing key features and concepts in your
| software and let the computer handle everything in-between. Until
| it becomes its own market, then we gone.
| dgudkov wrote:
| Of course CS jobs will exist. Who else will be fixing the bugs
| the AI generates?
| mfuzzey wrote:
| In the early 1980s there was a code generator program called "The
| Last One", because it was supposed to be the last one anyone would
| need. Didn't quite work out.
| https://en.wikipedia.org/wiki/The_Last_One_(software)
| amelius wrote:
| What took us by surprise, however, is that AI is better at soft
| skills (language, art) than at math.
| lexx wrote:
| Software development demands a very deep understanding of a
| company's business model and effective communication between a
| lot of people to get the final result right. Not only in terms of
| coding, but also in terms of strategy and architecture. AI can
| definitely help for quick prototyping, solution comparison,
| boring maintenance and stuff like that.
|
| But how can AI help build something that not a single person has
| the answer to what that is?
| cutler wrote:
| I predict Clojure and Ruby will experience a renaissance as they
| are the 2 most expressive languages and furthest removed from the
| machine. Seriously, though, won't low-level languages like C, C++
| and Rust be the first to become obsolete for everything outside
| AI itself? Isn't it easier for something like ChatGPT to produce
| code which is close to the metal? Maybe Larry Wall was more of a
| visionary than we give him credit for when I tried to design a
| language which was context-driven and as close to English as
| possible.
| ngcazz wrote:
| Unless you're already getting paid for delivering a Big Design Up
| Front, by a customer who thinks they know what the software needs
| to do.
|
| In that case there's zero incentive to place yourself in your
| user's shoes and work to mitigate those problems. You're a
| feature factory getting paid to reinforce existing workflows and
| paradigms, and you'd better not forget that as your efforts to
| recenter the conversation around user needs will be met with
| derision and you'll be seen as confrontational.
| marginalia_nu wrote:
| Sure, but that sort of thinking is a way to become more than a
| factory worker.
|
| Programming is magic. It lets you change how the world works.
| Never forget that.
| ngcazz wrote:
| That's a simple and yet super poignant point that I wish I'd
| made when I picked up and left that role :)
| CSMastermind wrote:
| There's a weird phenomenon I've seen in a few domains: prideful
| ignorance.
|
| Backend engineers who proudly don't know how to write frontend
| code and vice versa. Professional engineers who refuse to learn
| how to use modern IDEs and monitoring platforms. People who don't
| know how to quickly prototype software as if building something
| without complete rigor is beneath them. People who refuse to
| learn or work in certain programming languages they deem
| inferior.
|
| And rather than seeing this as a gap in their own skillset they
| think of it as a mark of intelligence or moral superiority.
|
| I suspect we'll see another divide around AI assisted coding with
| some engineers simply refusing to learn how to use the tools
| effectively to make themselves more productive as a point of
| pride.
| wiseowise wrote:
| > I suspect we'll see another divide around AI assisted coding
| with some engineers simply refusing to learn how to use the
| tools effectively to make themselves more productive as a point
| of pride.
|
| They will naturally fall behind their colleagues.
| bjornsing wrote:
| As an engineering manager I'm usually more concerned with those
| engineers that tend to spend all their time fiddling with IDEs
| etc, so that they will some day (that never comes) be soooo
| productive.
| awestroke wrote:
| If you were my manager I'd quit immediately, and I'm a top
| performer.
| latency-guy2 wrote:
| Don't worry, you're fired.
| mrits wrote:
| What is a modern IDE? It seems like the more recent trend has
| moved away from bloated IDEs and more to a "bring your own
| editor and plugins" environment. From my perspective IDEs lost and
| things are now closer to how Vim users have been coding for
| decades.
| krab wrote:
| It doesn't seem so to me. With LSP, those editors, including
| Vim, can be very large and featureful.
|
| A modern (or any) IDE, in my opinion, can debug your code,
| push parts of the editor into REPL (with debugging),
| understand code for things like autocomplete, linting, quick
| navigation and usage search. And yes, Vim can serve as a
| modern IDE if you spend enough time with it.
| mrits wrote:
| It seems like you start out by disagreeing and then just
| give a summary of why you don't need an IDE.
| krab wrote:
| I'm saying the trend (IMHO) is not away from a full blown
| IDE and that smaller editors with plugins and LSP can
| fill the role as well, including the resource
| consumption.
|
| Vim can be very slow with a ton of plugins.
| mrits wrote:
| They not only can fill the role, they do fill it, and
| have been for longer and better than IDEs.
| joseph8th wrote:
| Emacs. :)
| debesyla wrote:
| > Professional engineers who refuse to learn how to use modern
| IDEs
|
| In my experience - I ignore the modern IDEs and systems purely
| because I don't have time to learn every new tool... :-D
| sgu999 wrote:
| In my (small) experience I've seen that only once, a perfect
| specimen who fits your description. Unfortunately this person
| was acting-CTO of a startup. They decided that for a consumer
| product very close to what a smartphone can do, they didn't
| need an OS. So they started writing their build system with
| make only, then an OS, an IP stack, then a GUI, then added
| support for multithreading,... Of course Android existed, that
| was the end of the 2010s. The retail price of a cheap (and
| faster!) Android phone was even lower than our BOM. 4 years and
| 20M later, the company went bankrupt without having delivered
| the product it promised, but _they_ had fun writing their
| "superior" software.
|
| As long as people are in charge, we'll have plenty willing to
| pay technical workers who see themselves as artists.
| robinsonb5 wrote:
| That sounds like a fantastic hobby project - but being able
| to tell the difference between a hobby project and a viable
| product in the marketplace is... maybe something AI can help
| with!? :P
| postsantum wrote:
| This could be Terry Davis if he hadn't had a mental illness
| wiseowise wrote:
| Terry actually delivered product.
| [deleted]
| pyuser583 wrote:
| Software might just be a tool, but it's a tool we fall in love
| with.
|
| I loved coding as a kid. It was so much fun.
|
| As a grownup, I loved learning Linux.
|
| I tolerated containers, dreaded Kubernetes, and am indifferent to
| AWS.
|
| But it's that initial love that sucks you in.
| dagss wrote:
| I have seen cases of companies that focus on recruiting seniors
| that get a lot of product responsibility and can quickly find
| solutions since they see the customer/product view and also will
| have a good feel for how, and in what order, to best deliver
| things in code. So after talking to a customer about a problem,
| you just go and "talk" to the computer about the solution and get
| it out of the way quickly.
|
| I have also seen cases of companies where you have one PM, one PO
| and one Team Lead to manage a group of four developers. In that
| case developers are seen more as translators.
|
| My view of looking at this now is it is a bit like learning a
| language. Code is the tool you use to talk to a computer.
|
| If you need to close a deal in a country where English is not
| spoken, do you prefer to send a businessperson who knows the
| language, or do you send a businessperson + a translator?
|
| I much prefer companies where those who know how to code can
| still fill more of the product/business role than be seen as
| translators. However, I realize it is hard to find people who
| know both, and it may be easier to recruit a combination of
| business/product people who happen not to speak code, and
| translators who happen to speak code.
|
| This is perhaps also the open secret about startups: People
| speaking the language of computers without being limited by their
| role to act as translators.
| oulipo wrote:
| Of course what John says is true, it is important (and will
| always be) to understand how to build a good product, but the
| discussion about the future of work should also include a
| discussion about tax and redistribution, because we cannot let a
| few corporations take the riches from the rest of the world.
| uxcolumbo wrote:
| Exactly. This is an important point to discuss and to solve for,
| otherwise we'll end up in a world like the one shown in the movie
| Elysium.
|
| What will those people do, whose job will become obsolete? Are
| there support systems available to help people learn new
| valuable skills society needs?
|
| If only a few people at the top will benefit from these tech
| advancements and the rest will have to fight for scraps then
| society will eventually fail or end up in a total tech
| feudalistic system.
|
| Back in the 60s it was said that future advancements and
| automation will transform our society into a leisure and more
| innovative society.
|
| Whatever happened to that vision?
| href wrote:
| As long as there is creativity in programming, and I think there
| is a fair bit of that, AI is just going to be a tool.
|
| GPT-4 is great at sourcing human knowledge, but I think it can't
| really walk unbeaten paths. This is where humans shine.
|
| Case in point: I tried to ask the AI to come up with a new World
| War Z chapter set in Switzerland, and it was unable to produce
| anything original. I had to keep feeding it ideas, so it could
| add something new and interesting.
| auggierose wrote:
| Really smart people I know have no clue what code is. AI will
| make them code, too.
| noodles_nomore wrote:
| An average programmer's main job is to track down and fix bugs
| that shouldn't exist inside software that shouldn't exist built
| on frameworks that shouldn't exist for companies that shouldn't
| exist solving problems that shouldn't exist in industry niches
| that shouldn't exist. I'm 100% convinced that, if someone comes
| along and creates something that _actually_ obsoletes 95% of
| programming jobs, everyone would very quickly come to the
| conclusion that they don't need it and it doesn't work anyway.
| ChatGTP wrote:
| It's pretty true. Someone on here today wrote "teach it to
| understand swagger", and I actually laughed; I've used Swagger
| and it often turns into a Frankenstein, and sometimes for good
| reason. I completely understand the sentiment and I like
| Swagger.
|
| I believe the world is wiggly, not geometrically perfect,
| intellectuals struggle with that because square problems are
| easier to solve. Ideal scenarios are predictable and it's what
| we like to think about.
|
| Have you ever had to use a sleep() intentionally just to get
| something shipped? That's a wiggle.
|
| We're going to try to square off the world so we can use ChatGPT
| to solve wiggly problems. It's going to be interesting.
|
| Yesterday I tried to use a SaaS product, and due to some
| obscurity my account had issues and the API wouldn't work. They
| have a well-specified API, but it still didn't work out. I've
| been working with the support team to resolve it. This is what I
| call a wiggle; they seem to exist everywhere.
|
| Ask a construction worker about them.
| asddubs wrote:
| >Have you ever had to use a sleep() intentionally just to get
| something shipped?
|
| no, I'm not that deep in hell
| matwood wrote:
| > Ask a construction worker about them.
|
| Hah. So true. The more I work on renovating parts of my house,
| the more I see where a worker's experience kicked in to finagle
| something. Very analogous to programming. All the parts that fit
| together perfectly are already easy today. It's those bits that
| aren't square, but still need to fit, where the 'art' comes in.
|
| Can AI also do that part? IDK, currently I believe it will
| simply help us do the art part much like the computer in Star
| Trek.
| ChatGTP wrote:
| I'm positive about it. There is a lot of repetition in coding,
| and it's rare we get to spend time on the good bits because of
| it.
|
| If we need a semi-intelligent system to help us with the
| copy pasta, so be it.
| kybernetikos wrote:
| Actually, ChatGPT is quite good at understanding some kinds of
| wiggliness. I built a RESTful API and documented it in a
| readme.md file in the wiggliest of ways. I then asked ChatGPT
| to turn the readme into a swagger spec, and then to give me a
| page that read the spec and produced a nice doc page with an
| API exercise tool. It performed both tasks really well and
| saved me a whole bunch of time.
| pcthrowaway wrote:
| I have some meticulous API docs I've written, which I tried
| to get ChatGPT to convert into swagger
|
| It failed spectacularly
|
| I wonder if it's because the API is quite large, and I had
| to paste in ~10 messages worth of API docs before I was
| finished.
|
| It kept repeating segments of the same routes/paths and
| wasn't able to provide anything cohesive or useful to me.
|
| Was your API pretty small? Or were your docs pretty
| concise?
| kybernetikos wrote:
| ChatGPT has a token limit. If you exceeded it, then it would
| have no way of delivering a good result, because it would
| simply have forgotten what you said at first. My API was not
| huge, about 8 endpoints.
| ilaksh wrote:
| It can accept about 4k tokens, maybe 3000 words or 3500.
|
| GPT-4 can now accept 8k or 32k. The 32k version is 8
| times larger than the one you tried.
|
| And these advances have come in a matter of a few months.
|
| Over the next several years we should expect at least
| one, quite easily two or more orders of magnitude
| improvements.
|
| I don't believe that this stuff can necessarily get a
| million times smarter. But 10 times? 100? In a few months
| the memory increased by a factor of 8.
|
| Pretty quickly we are going to get to the point where we
| have to question the wisdom of every advanced primate
| having a platoon of supergeniuses at their disposal.
|
| Probably as soon as the hardware scales out, or we get
| large scale memristor systems or whatever the next thing
| is which will be 1000 times more performant and
| efficient. Without exaggeration. Within about 10 years.
| ChatGTP wrote:
| So people want to build a nuclear reactor on the moon; I think
| these things should probably live on the moon, or better yet
| Mars.
|
| That should be the place for experiments like this.
|
| Low-latency links back to Earth, and first see how it goes.
|
| Also, you don't think there will be resource constraints at
| some stage? It's funny that we yelled at people for Bitcoin,
| but when it's ChatGPT, it's fine to run probably tens of
| thousands of GPUs? In the middle of a climate crisis? Not good.
| execveat wrote:
| Yeah, but now ask it to write a program that uses this API
| and then let it debug problems which arise from the swagger
| spec (or the backend) having bugs. I don't think LLMs have
| any way of recognizing and dealing with bad input data.
| That is, I don't think they can recognize when something that
| is supposed to work in a particular way doesn't, and fixing it
| is completely out of your reach, but you still need to get
| things working (by introducing workarounds).
| sebzim4500 wrote:
| Have you tried it? If you copy the errors back into the
| chat I could imagine it working quite well. Certainly you
| can give it contradictory instructions and it makes a
| decent effort at following them.
| execveat wrote:
| Yes, I'm subscribed to poe.com and am playing with all
| public models. They all suck at debugging issues with no
| known answers (I'm talking about typical problems every
| software developer, DevOps or infosec person solves every
| day).
|
| You need a real ability to reason and preserve context
| beyond the inherent context window somehow (we humans do it
| by keeping notes, writing emails, and filing JIRA
| tickets). So while this doesn't require full AGI and some
| form of AI might be able to do it this century, it won't
| be LLMs.
| ilaksh wrote:
| If you think that the average public LLM is equivalent to
| ChatGPT or GPT-4 then you are completely mistaken. By a
| factor of say 500-10000%.
| lr4444lr wrote:
| Exactly this.
|
| Everyone thinks only in terms of current needs and state of
| affairs of people when analyzing a future technology. No one
| thinks about the insatiable human desire for more and the
| higher expectations for that new normal that always meets the
| increased productivity available. Anything that automatically
| solves much of our wants is doomed to be static and limited.
| TheLoafOfBread wrote:
| I actually find it amusing that managers will generate a 100k
| line project with AI and then start figuring out that it does
| not work as they want. Then they will figure out that actual
| developers are needed to fix it, either in a very strict way of
| telling the AI what should happen (i.e. higher-level
| programming) or by directly fixing code generated by the AI.
| wizofaus wrote:
| You're assuming that the AI is even generating anything that
| will make sense to a human. It seems inevitable we'll reach
| the point that for SaaS the AI will do everything directly
| based on some internal model it has of what it believes the
| requirements are (e.g. it will be capable of acting just like
| a live web server), whereas for desktop and mobile apps,
| while that paradigm still remains relevant, it will generate
| the compiled package for distribution. And I imagine it would
| be unrealistic to attempt reverse engineering it. Fixing bugs
| will be done by telling the AI to refine its model.
| henry2023 wrote:
| At least permabans are going to be more fun
| zizee wrote:
| > what it believes the requirements are
|
| It will have to describe these requirements in a way that a
| human can understand, and verify. The language will have to
| be unambiguous and structured. A human will need to be able
| to read that language, build up a mental model, and
| understand it is correct, or know the way to make
| corrections. Who do you think that person will be? Hint: it
| will be a specialist that knows how to think in a
| structured, logical way.
| uh_uh wrote:
| > You're assuming that the AI is even generating anything
| that will make sense to a human.
|
| Why wouldn't it? It's trained on code generated by humans, and
| it already generates code that is more readable than the output
| of many humans, me included.
| wizofaus wrote:
| But why would anyone bother with using AI to generate
| human readable code if the AI can generate the final
| desired behavior directly, either on-the-fly or as
| executable machine code?
| lelandbatey wrote:
| Because the AIs, at least right now, can't generate/change code
| so that it correctly does what's expected with the confidence
| intervals we expect. I've tried to get it to happen, and it
| just doesn't. As long as that's true, we'll need to _somehow_
| get the correctness to where it needs to be, and that's going
| to require a person.
| jiggawatts wrote:
| A lot of people have already figured out some tricks for
| improving code generation.
|
| You can fairly easily update the "next token" choice with
| a syntax check filter. LLMs like ChatGPT provide a
| selection of "likely" options, not a single perfect
| choice. Simply filter the top-n recommendations
| mechanically for validity. This will improve output a
| lot.
|
| Similarly, backtracking can be used to fix larger
| semantic errors.
|
| Last but not least, any scenario where a test case is
| available can be utilised to automatically iterate the
| LLM over the same problem until it gets it right. For
| example, feed it compiler error messages until it fixes
| the remaining errors.
|
| This will guarantee output that compiles, but it may
| still be the wrong solution.
|
| As the LLMs get smarter they will do better. Also, they
| can be fine tuned for specific problems automatically
| because the labels are available! We can easily determine
| if a piece of code compiles, or if it makes a unit test
| pass.
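The "feed it compiler error messages until it fixes the remaining errors" loop described above can be sketched in a few lines. This is a hedged illustration, not anything from the thread: `ask_llm` is a hypothetical stand-in for whatever LLM completion call is available, and the syntax check here uses Python's built-in `compile()` rather than a real compiler.

```python
def compile_check(source: str) -> str:
    """Syntax-check Python source; return '' on success, else the error text."""
    try:
        compile(source, "<generated>", "exec")
        return ""
    except SyntaxError as e:
        return str(e)

def generate_until_compiles(prompt, ask_llm, max_rounds=5):
    """Feed errors back to the model until the output compiles (or give up)."""
    code = ask_llm(prompt)
    for _ in range(max_rounds):
        err = compile_check(code)
        if not err:
            return code  # compiles, but may still be the wrong solution
        code = ask_llm(f"{prompt}\nYour last attempt failed with:\n{err}\nFix it.")
    return None
```

As the comment notes, this only guarantees output that compiles; adding a unit test to the loop would push it toward semantic correctness as well.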
| wizofaus wrote:
| Currently ChatGPT isn't, at least via public access, hooked up
| to a compiler or interpreter that it can feed its generated
| code into to determine whether it executes as expected. That
| wouldn't even seem particularly difficult to do, and once it
| is, ChatGPT would literally be able to train itself to get the
| desired result.
| raincole wrote:
| > It seems inevitable we'll reach the point that...
|
| It's inevitable that we'll reach AGI. It's inevitable that
| humans will go extinct.
|
| Everything you described is not how today's AI works. It's
| not even a stretch, it's just pure sci-fi.
| wizofaus wrote:
| I'll be genuinely surprised if we don't have tools with
| that sort of capability within 10 years, quite possibly
| much sooner.
| camdenreslink wrote:
| I don't think it is inevitable we'll reach AGI. I think
| that question is very much up in the air at the moment.
| raincole wrote:
| My point is that "It's inevitable that {scifi_scenario}" always
| sounds kinda plausible but doesn't necessarily mean anything.
| omnicognate wrote:
| Are you arguing that LLMs already provide the technology to
| do this or are you arguing that it "seems inevitable" to
| you in the sense that somebody might think it "seems
| inevitable" that humans will some day travel to the stars,
| despite doing so requiring technological capabilities
| significantly beyond what we have yet developed?
| wizofaus wrote:
| It doesn't strike me as being much of a leap from what we
| have already, certainly not compared with traveling to
| the stars.
| pessimizer wrote:
| I know a small financial agency in the 00's that laid off
| their one-person IT department because they thought the
| computers would run themselves. It's honestly great that
| they're overselling AI, lots of messes to clean up.
|
| edit: Ultimately there are going to be iterative pipelines
| with traditional programmers in the loop rearranging things
| and reprompting. Math skills are going to be deemphasized a
| bit and domain skill value increased a bit. Also, I think
| there's going to be a rise in static analysis along with the
| new safe languages, giving us more tools to safely evaluate
| and clean up output.
| teaearlgraycold wrote:
| Ah, the old:
|
| "Everything's broken, why am I paying you?"
|
| "Everything works, why am I paying you?"
| mclightning wrote:
| Existential crisis averted by another existential crisis... :D
| edanm wrote:
| I highly disagree. That might ( _might_!) be true of some
| segments of the tech industry, like SV-based startups, creating
| products no one wants.
|
| But it's definitely not true of the _average_ piece of
| software. So much of the world around us runs on software and
| hardware that _somebody_ had to build. From your computer
| itself, to most software that people use on a day-to-day basis
| to do their jobs, to the cars we drive, to the control software
| on the elevators we ride, software is everywhere.
|
| There is a lot of waste in software, to be sure, but I really
| don't think the _average_ SE works for a company that shouldn't
| exist.
| bjornsing wrote:
| I'm leaning in this direction too. I saw someone on Twitter
| phrase it quite well: "You can believe that most jobs are
| bullshit [jobs]. And you can believe that GPT-4 will completely
| disrupt the job market. But you can't believe both."
| mrob wrote:
| Bullshit jobs exist because upper management can't know
| exactly what everybody in the company is doing, which leaves
| opportunities for middle management to advance their own
| interests at the expense of the company as a whole. Upper
| management might suspect jobs are bullshit, but it's risky
| for them to fire people because the job might actually be
| important.
|
| But upper management can know exactly what LLMs are capable
| of, because they are products with fixed capabilities.
| ChatGPT is the same ChatGPT for everybody. This makes firing
| obsolete workers much safer.
| mrcartmeneses wrote:
| Hate to break it to you but upper management is usually the
| main driver of bullshit jobs. They know what's going on
| spunker540 wrote:
| Won't it find traction in bullshit jobs pretty easily?
| yobbo wrote:
| It's rather that the jobs (not the workers) are replaced in
| the way saddle makers were replaced by mechanics.
| kemiller wrote:
| He's right of course. But I can't deny that I like the way it
| works now and will miss it.
| m3kw9 wrote:
| If AI can be that good, it will just be a new level of software
| abstraction you have to learn. The demand for better software
| to serve our needs won't stop, and we'll still need software
| people to "program" it the way we need it.
| wintorez wrote:
| Computer was a job title 50-60 years ago; then it became a
| thing. I have a feeling that Programmer is on its way to no
| longer being a job title.
| kypro wrote:
| Silly take honestly. I use this example a lot, but how exactly do
| self-checkout systems make cashiers more productive?
|
| There are tools which increase human productivity, while still
| requiring it (barcode scanners, for example). And then there are
| another class of tools which make human labour obsolete (self-
| checkout systems).
|
| LLMs (as they exist today) could be considered both. GitHub
| Copilot is an example of how GPT can be used as a productivity
| tool by human programmers, but as the technology progresses AI
| will become less of a "copilot" and will gradually replace
| humans as the main decision maker. Then eventually LLMs will
| probably be used by completely non-technical people, replacing
| the need for coders entirely.
|
| Now the argument becomes: well, this will open up new
| opportunities. Instead of being a programmer you can be a user
| researcher on a project, which could be the case, but this is a
| much more nuanced argument.
|
| The most well paid jobs are typically those which require years
| of knowledge retention and require the human to basically serve
| as an advanced expert system in some domain.
|
| Both "good" and "bad" programmers can write code into a text
| editor; the difference is that the "good" programmer will make
| decisions backed by years of professional experience, same for
| a good doctor or good lawyer. This is why we pay more for these
| professions: that depth of knowledge is hard to accumulate.
|
| This is the very thing that GPT attacks. What it can't replace is
| someone physically laying bricks or plumbing pipes. But there is
| less depth of knowledge required in jobs like this which limits
| salaries for these professions.
|
| So sure. Perhaps in the future someone can say, "hey, GPT, build
| me [x]", but just remember you won't be the only one who can do
| that and there is no significant depth of knowledge in such a
| job. So while GPT won't replace all jobs and may even create some
| new ones, expect it to replace or devalue the majority of "good"
| jobs like doctors, programmers, lawyers, designers, etc.
|
| So to Carmack's point, he's right you'll be able to build
| websites and apps faster using GPT as a tool, but you'll probably
| do so for a fraction of the salary.
|
| We've run similar experiments over the last several decades
| with outsourcing. If your labour can be easily outsourced, then
| your ability to retain a good salary drops. It's not that a
| worker today can't make clothes in the US faster and better
| than at any point in the past; it's that it makes no economic
| sense to do so.
|
| -----
|
| Another thing I'd note here is that I'm autistic as are a lot of
| programmers in my experience. My brain is built to do technical
| things and I struggle intensely with human interaction. In my
| opinion it's not that programmers "don't understand" that
| software solves problems for people; it's that a lot of us
| don't naturally excel in those areas. I think it's fair to say
| a lot of us like to stick to what we're good at, and that's
| generally writing code
| and designing complex systems. The more time I have to talk with
| users about their needs instead of doing technical work, the less
| useful I am. And I'm guessing designers are also people who want
| to design rather than type prompts into a chatbox.
|
| So another consequence here is that we might increasingly be
| forced to do jobs we don't really want to do as AI restricts the
| areas of labour where humans can still compete.
| sourcecodeplz wrote:
| I kind of see "full product skills" as building the product and
| making it successful, with all that that entails.
| osigurdson wrote:
| I'm personally experiencing a bit of a honeymoon-over moment with
| ChatGPT (even 4). It seems to be better in the exploratory phase
| of a project - show me something about x or y. However, I
| thought / hoped it would be better at doing things that I know
| how to do but don't feel like writing, or pulling in a library
| for (which then becomes a curation problem), since they should
| be < 50 lines of code.
|
| I really struggled with it, for example, to write a base62
| serializer (C#). It variously came up with an O(N^2) solution,
| performed far too many allocations, missed edge cases, or
| simply wrote incorrect code.
|
| This is just one example of ~50 lines of code that you would just
| like to work.
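For reference, the core of such a serializer is indeed small. The commenter's target was C#, so the following is only an illustrative Python sketch of the expected shape: encoding a non-negative integer in O(output length), with no quadratic string concatenation.

```python
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def base62_encode(n: int) -> str:
    """Encode a non-negative integer as a base62 string."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n > 0:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    # Digits come out least-significant first; reverse once at the end.
    return "".join(reversed(digits))

def base62_decode(s: str) -> int:
    """Decode a base62 string back to an integer."""
    n = 0
    for ch in s:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

The n == 0 case and the final reversal are exactly the kinds of edge cases the commenter reports the model missing.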
|
| In any case, I have felt in the past that LLMs could make me 10x
| more productive but today it feels more like 1.1x. I'm hopeful my
| disillusionment is temporary.
| cloudking wrote:
| What prompts did you try? This would be a good challenge for
| folks in the thread
___________________________________________________________________
(page generated 2023-03-19 23:01 UTC)