[HN Gopher] Deepmind's alphacode conquers coding, performing as ...
___________________________________________________________________
Deepmind's alphacode conquers coding, performing as well as humans
Author : Brajeshwar
Score : 247 points
Date : 2022-12-15 14:43 UTC (8 hours ago)
(HTM) web link (singularityhub.com)
(TXT) w3m dump (singularityhub.com)
| Konohamaru wrote:
| There's this paradox in AI known since the 80s (I think it's
| Moraevac's Paradox?) that the abilities we acquired latest in our
| evolutionary history (e.g. chess, language, algorithmic thinking)
| are the easiest for AI to automate. But the tasks that are deep
| in our evolutionary history (e.g. object recognition, navigating
| the environment, etc...) are extremely difficult.
|
| Some AI speculators, using Godelesque reasoning, posit that
| computer programming will be the last job to be automated by AI,
| so it can bootstrap itself. In reality, it will be one of the
| first jobs automated by AI, while jobs like cook, barista, etc...
| will be much later automated, if ever.
| Der_Einzige wrote:
| I just sat in a Burmese restaurant with an automated barista.
| Korea has been long ahead of us on automating food service too.
| Automatic 24/7 drink making machine hands are common in Seoul.
| Konohamaru wrote:
| Tasks that involve manipulating physical objects to maintain
| biological homeostasis (e.g. cook) are hyper-optimized by
| biological evolution. It is impossible for engineers to
| engineer a unit that performs better with the same energy
| restrictions humans have.
| Der_Einzige wrote:
| Good thing fusion actually is just 20 years away after all
| this time!
| Animats wrote:
| So when do we get a web site designer whee you write what you
| want and a web site is generated? That sort of thing is stylized
| and there's lots of training data available.
| mannykannot wrote:
| "AlphaCode tackles the problem by generating over a million
| potential solutions for a single problem--multitudes larger than
| previous AI attempts.
|
| "As a sanity check and to narrow the results down, _the AI runs
| candidate solves through simple test cases._ It then clusters
| similar ones so it nails down just one from each cluster to
| submit to the challenge. "
|
| The big question for me is, where did the test cases come from?
| They seem to be contributing much more to the outcome here than
| just "a sanity check and to narrow the results down." If they
| were effectively spelled out as part of the challenge, then this
| is an impressive result extending the trajectory of other recent
| achievements, but not so far as to justify the title's "conquers
| coding." If they were not, then this would seem to be taking
| things to a whole new level.
| sfink wrote:
| Our current "AI" does only interpolation, not extrapolation.
|
| You feed it a large input corpus and it digests it in clever
| ways. Then when you probe for something contained wholly within
| the space described by that corpus, it is amazingly good at
| fabricating something plausible to match the point in that space
| that you requested.
|
| Which covers a lot of stuff and is very useful, but does very
| little for problems that require extrapolation. It can't expand
| the edges, it can't come up with anything truly original. It
| can't solve problems that people haven't already solved and
| written down the solutions somewhere the AI could find them.
|
| Another way of saying it: AI today is much better at memory than
| thought.
|
| Career advice for young people today: specialize in pushing the
| edges, not in filling in details. There will be some areas where
| applying existing stuff will hold out and be useful for a long
| time, but you'll always be racing against the AI. Colonize the
| parts of problem space where AI doesn't have the imagination to
| go.
|
| (Of course, humans are notoriously bad at correctly recognizing
| what does or doesn't require originality. Hell, we think we're
| making decisions about what to do every minute of every day, when
| in fact we're just a bunch of dancing meat automata following
| ingrained patterns 99% of the time.)
| visarga wrote:
| When verification is possible, the interpolation machine starts
| to extrapolate.
|
| There is a process that generates ideas/solutions, and a
| process that tests them. An artist and a critic, a scientist
| and a lab. Together they form the experimentation loop.
|
| Let's take the game of Go for example. Testing who won a game
| is trivial. AlphaGo managed to beat humans in a few days of
| self-training. In other words, those edges you speak about can
| be pushed with massive search and verification.
|
| There is no reason we can't do massive search + verification
| for math and code. This is a good way to create training data
| where it doesn't exist in sufficient quantity.
|
| Other things can be simulated with expensive computation, and
| then "distilled" into fast neural networks. Then we apply the
| neural net to fast-search solutions. In the end we need to
| verify some of them (thinking of weather simulations, new
| materials, new drugs, ...)
|
| Also reminded of the recent AlphaTensor who leveraged massive
| learning from verification to beat Strassen's algorithm who was
| state of the art for 50 years. The is no reason neural nets
| should remain purely interpolative if we can manufacture good
| training data by running computation or experiments.
|
| Ideas are cheap, verification matters. Generative outputs are
| worthless without verification.
| tbalsam wrote:
| Er....this feels like every main criticism going back to even
| the expert systems pre-90's.
|
| Could you please provide some support for your argument? This
| was repeated a lot in the early days of the modern wave
| (2012-2016) but was pretty thoroughly debunked as we've
| explored generalization and how these models disentangle
| intrinsic concepts and compute with them. Heck, even modern
| transformers are restricted memory Turing complete and use that
| to their advantage.
|
| Also, I do not mean to be rude, but frankly saying "colonize
| the parts of the problem space where AI doesn't have the
| imagination to go" is frankly rather terrible as it leans on
| the imagination argument of AI. At this point, being adaptable
| to co-integrate will be good, otherwise I could see people
| following that stuck inside of some kind of Sisyphean pseuso-
| Luddite escapist nightmare.
|
| Source for opinions: have been involved in ML in some form for
| most of the modern wave, and am appropriately (quite) skeptical
| about the AI takeover/revolution/eventual singularity
| belief/etc.
| MetaWhirledPeas wrote:
| It can write an original poem using unique prompts I give it.
| Are you suggesting a similar poem exists somewhere else? I
| think not.
|
| It cobbles together examples, sure, but that's exactly what we
| do too. Everything "original" we make is riddled with
| subconscious outside influences.
| klabb3 wrote:
| Like other responders, I think your take is oversimplifying.
| It's hard to both classify what imagination is, how to judge
| whether AI has it, and we also have humans tuning these models,
| which may or may not contribute to its blandness.
|
| That said, I think it's still fair to say that most
| interpolation type of work is looking very threatened by AI as
| it is today, so the converse (don't go into those fields) is
| probably sound advice in light of current developments. As for
| the rest, I suspect we'll be forced to chisel off piece by
| piece from our zeitgeist of "imagination". Some pieces will
| fall off quickly, as AIs replace them, and some will take
| longer, perhaps a lot longer.
|
| I do find it interesting that computers ended up killing it in
| unpredictable domains such as style transfer and NLP, while
| being mediocre-at-best in eg humor. It may mean that we have
| over- and underestimated aspects of what traits are unique and
| sophisticated.
| angarg12 wrote:
| I've conducted hundreds of FAANG-level coding interviews, and
| recently tried to run my interview questions through ChatGPT. The
| model was able to spit out a correct, optimal, clean Python
| solution effortlessly.
|
| Interestingly when I asked a follow up, a harder version of the
| same problem, ChatGPT spits out code that sort-of looks correct,
| but is actually nonsense.
|
| So, would it pass the interview? That's difficult to say.
| Interviews don't happen in a vacuum, you also consider the
| candidate's thought process, explanations, alternatives,
| tradeoffs...
|
| Still, I see this is a game changer for this kind of interviews.
| So long as candidates understand and explain the output code,
| there is a good chance they would clear the interview. Even if
| the code is incorrect, it might given them some hints towards the
| right solution.
|
| So where do we go from here? I always loathed this interview
| format, and these languages model reinforce it even further.
| Interview cheating has always been there, but it is generally so
| rare that it isn't a real concern. However these tools are too
| effective and easy to use. I can see a real divide between people
| who use them and people who doesn't. This type of interview might
| become even more useless at telling good coders apart.
|
| My take is we have 3 choices:
|
| a) Ignore it.
|
| b) Try to fight it.
|
| c) Embrace it.
|
| a) is not an option. b) would make an already pretty dreadful
| process even more intolerable. My money is on c)
|
| I can envision an interview format where we allow, or even
| encourage people to use ChatGPT and AlphaCode during the
| interview, much like you would use your IDE or a search engine.
| In fact seeing how a candidate understands and uses those code
| snippets can be a very interesting data point.
|
| Either that or scrap leetcode-style interviews altogether.
|
| P.S.: I was thinking about writing a blog post about this, if
| people think it'd be interesting.
| LawTalkingGuy wrote:
| I get the same results from asking for code. Near 100% success
| with an initial demo, and then pretty tragic performance when
| making a series of changes to actually fit the concept. I got
| it to write 3d rotation code, and to switch to using rotors
| instead of quaternions but when I asked it to rewrite the
| combine_rotors() method it rewrote it to combine Enigma rotors.
| I explained its error and it apologized but just wasn't able to
| go back to the original code and work with it anymore.
|
| Interviewing is going to be shaken up but I think some methods
| are more timeless. I've been giving the subject code instead of
| asking them to write it for a while now. Largely because I
| wanted to talk more and watch them type less.
|
| We start by discussing the coding challenge guidelines and I
| leave them vague. They need to understand the goal and what
| I've left out and suggest those guidelines themselves. Once we
| agree though, I give them the code. "Here's what's running
| now."
|
| Then I update the goal and we discuss changes. I get them to
| "whiteboard" certain things, like what a query looks like with
| their proposed changes or whatever, and we discuss big-O, etc.
| This, imho, is how whiteboarding is actually used - not to
| write whole programs but to provide examples and pick them
| apart.
|
| I feel that this would work even if they were using an AI in
| another window. We're trying to select for developers with
| common sense and domain knowledge, and who can clearly discuss
| engineering tradeoffs. Actually making the changes (the coding
| itself) was a big part of the job and that's decreasing, but
| imho all the other requirements remain. They'll still need to
| know how to handle the issues I talked about above, of trying
| to get the model to write the right code!
| woeirua wrote:
| We're just going to move to on-site interviews only. ChatGPT
| can't enter the equation then.
| amildie wrote:
| >ChatGPT can't enter the equation then.
|
| In a few years, ChatGPT will be _conducting_ these
| interviews.
| woeirua wrote:
| LMAO, no. What would prevent someone from just looking up
| the answers using ChatGPT on another machine?
| fourstar wrote:
| Neuralink blocks your path.
| feet wrote:
| Neuralink is not realistic in its current conception, they
| won't easily solve the problem of input to a brain
| woeirua wrote:
| Neuralink doesn't work.
| [deleted]
| exceptione wrote:
| I would expect A, because it is an option.
|
| Like in mathematics or other tests students are not allowed to
| use the internet or an advanced calculator in order to test
| wether they truly comprehend the stuff.
|
| If you think that your FAANG-interviews are any good, then just
| keep them in the format you already have, by making sure
| applicants cannot use AI during the test. I would have on-sites
| with pen+paper, whiteboard or a prepped/supervised machine,
| whatever.
|
| Of course, applicants could use AI to train for the interview,
| but that is not a problem, as long as you test their
| comprehension.
| angarg12 wrote:
| I'm not a fan of making the process more dreadful than it
| already is. Forcing people to go to an office and use pen and
| paper or a whiteboard would do exactly that.
|
| I hope companies (including mine!) see the writing on the
| wall and stop trying to fight the future.
|
| This isn't accounting for even just pretending we are testing
| candidates on anything remotely indicative of on-the-job
| performance.
| AnimalMuppet wrote:
| Would training with such an AI _help_ their comprehension? Or
| _hurt_ it?
|
| My money is on "hurt".
| serjester wrote:
| Given the trends towards remote tech jobs, it could be
| difficult to convince your companies engineers to show up to
| the office for what's usually just a weed out leet code
| interview.
| mywittyname wrote:
| Proctored tests are common for certification exams. Perhaps
| there's a future where candidates go to one of these exam
| centers for one of their technical interviews.
| antipotoad wrote:
| Or just a live (remote) programming interview, with no
| use of Codex or Copilot.
| ren_engineer wrote:
| Algorithm interview and competitive programming type questions
| are probably some of the easiest for GPT to solve because there
| are a massive number of problems and solutions publicly
| available for training.
|
| The real benefit of AI is somewhat shown in this paper, it
| effectively solved the problems through brute force generating
| millions of possible solutions. For real world problems it
| would be interesting to let GPT generate a bunch of different
| solutions and push them into a test environment and see which
| works best.
|
| The biggest problem I see is black swan events where AI coded
| systems work great until something goes wrong and no human
| truly knows how all the pieces fit together.
| bcrosby95 wrote:
| If a company allows ChatGPT to be used on the job, anything but
| c) seems foolish. If the company doesn't allow it, c) seems
| like a bad idea.
| jgilias wrote:
| Your option d) that you don't even enumerate (scrap leetcode)
| would be best.
| gautamcgoel wrote:
| I expect that the reason it gave the correct answer to your
| first question is simply that it already saw the problem and
| memorized the solution - there is some empirical evidence that
| deep neural networks are able to memorize much of their
| training data.
| angarg12 wrote:
| Possibly. The problem itself is not in leetcode, but it
| almost certainly has been leaked somewhere. However the
| program was able to make a few changes to the code with some
| prompting, which hints a little bit more smarts than just
| regurgitating an answer.
|
| Nevertheless the point is moot. I've invented completely
| novel questions (promise!), and saw them leaked online after
| asking them twice. The process is fundamentally flawed and
| large language models are just making that glaringly obvious.
| neilv wrote:
| > _I 've conducted hundreds of FAANG-level coding interviews,_
| [...] I always loathed this interview format,*
|
| That's impressive perseverance. Did that wear on you?
| angarg12 wrote:
| I hate leetcode style interviews, but I've done hundreds of
| them. The irony is not lost on me. Some might say I'm part of
| the problem.
|
| Thing is, in a small way, I'm trying to change things from
| within. I try to make the process as palatable and fair to
| candidates as possible, while working within the constraints
| of the system. When I train new interviewers some of my top
| tips are:
|
| a) The purpose of the interview is to determine whether the
| candidate is a good fit for the company, and the company is a
| good fit for the candidate. b) The interview is an imperfect
| proxy for this.
|
| It then follows that interviews shouldn't overindex in the
| coding round. If I have helped someone to get an offer that
| wouldn't otherwise, I'm satisfied.
|
| BTW that's also one of the reasons I'm publishing this. I
| hope to push the point that leetcode-style coding interviews
| are outdated and should be burnt to the ground.
| nsxwolf wrote:
| Myself, I've only conducted dozens not hundreds of FAANG-
| style coding interviews. I also loathe the format, and it
| does indeed wear on me. I hate every second.
| time_to_smile wrote:
| The real issue with any of these AI innovations is they really
| point out places where humans have already started to behave
| like an AI in the first place.
|
| Ever since the emergence of leetcode style interviews I've been
| shocked at how many people can reproduce leetcode examples, but
| still fundamentally have no sense of algorithm design outside
| of the context of a job interview.
|
| Programmers with their sights set on acing a FAANG interview
| will just keep repeating leetcode problems until they start to
| memorize the common patterns (not the problems themselves of
| course, but the structure of these type of problems). What's
| disturbing to me is that I recall far more interesting
| discussion about algorithms in the era before leetcode
| dominated everything.
|
| The common solution isn't to understand algorithms better, but
| to become a leetcode solving robot.
|
| So it's no surprise to me that AI can pretty easily replicate
| humans that have tried to turn themselves into robots.
|
| We see similar patterns in the art that AI can create. It's
| very good at replicating a kind of art style of designers
| trying to turn themselves into design robots.
| codekilla wrote:
| Absolutely spot on. I actually do algorithm design, usually
| over a period of weeks (at least), and leet code is a joke
| for the serious algorist (I'm sure I'd fail an interview
| based on it). Nothing has so clearly illustrated the robotic
| nature of the leet code expert quite like this result has.
| savingsPossible wrote:
| Do tell!
|
| What sort of job leads you do design algorithms?
| codekilla wrote:
| Mathematical Biology/Bioinformatics. We have to think
| _very_ carefully about every step in the process of
| extracting information from large, diverse datasets--
| often writing things from scratch, combining
| /transforming things in novel ways, and implementing new
| mathematical ideas efficiently enough to be computable on
| large datasets.
| andrekandre wrote:
| > So it's no surprise to me that AI can pretty easily
| replicate humans that have tried to turn themselves into
| robots.
|
| this is such a great insight... i feel like it could even
| somehow explain a lot of politics and many other phenomena.
| eulers_secret wrote:
| > Interview cheating has always been there, but it is generally
| so rare that it isn't a real concern.
|
| _You 're_ the person they're trying to fool, so if they cheat
| successfully you would never know. You've never seen overt
| displays of bad cheating, which is different.
|
| Think you haven't seen a stick insect in years? Likely you
| have, but just didn't _notice_ it...
|
| Cheaters will always be more motivated than those trying to
| detect them - because _everything_ is on the line for them.
| sebzim4500 wrote:
| >You're the person they're trying to fool, so if they cheat
| successfully you would never know. You've never seen overt
| displays of bad cheating, which is different.
|
| I don't think it's widespread at least, since in my
| experience people that do well in technical zoom interviews
| do not drastically decrease in apparent competence when we
| move to in person rounds.
|
| People definitely get told interview questions by recruiters
| though, if you count that as cheating then it is everywhere.
| deegles wrote:
| > c) Embrace it
|
| asking "here's a chatgpt solution to the problem... what's
| wrong with it?" would be a solid process imo.
| kristiandupont wrote:
| Well, if they still have access to ChatGPT, they can ask that
| of it as well. It may or may not give a valid answer, just as
| it does with code.
| mike_hearn wrote:
| A big if. A lot of ChatGPT discussions seem to take for
| granted that it'll always be available/free/priced low
| enough that ~everyone has access to it. Seems more likely
| that at some point OpenAI will close it up and put it back
| behind an API.
| ilaksh wrote:
| You can use text-davinci-003 from the API now and it
| works better for many things.
| NegativeLatency wrote:
| This would be way more fun
| Kinrany wrote:
| And you can give every candidate a new version of it.
| angarg12 wrote:
| I love it!
|
| Particularly, ChatGPT can give apparently correct solutions
| that are wrong in subtle ways.
|
| Also reading, understanding, reasoning about, and fixing code
| other's wrote is way closer to on-the-job performance.
| red_admiral wrote:
| By the time someone gets an on-premises interview, you'd notice
| if they're using a bot or not? Or are all the interview rounds
| remote these days?
| angarg12 wrote:
| We've been doing remote-only interviews for years now.
| sgerenser wrote:
| In my experience (with FAANGs at least) the interviews have
| been virtual since the pandemic began. Haven't heard of any
| plans to return to return to flying out candidates for in-
| person interviews.
| philjohn wrote:
| That seems like it's likely been trained on various examples of
| FAANG questions that are posted to the likes of leetcode, with
| solutions often presented. The push for harder version was
| clever, and that it fell over is no real surprise.
| ilaksh wrote:
| By the way ChatGPT is not even the best model OpenAI has for
| writing code.
| busyant wrote:
| i think you're correct about option c, primarily because we
| could never force everyone to adopt the other 2 options.
|
| option c makes me nervous, but right now I can't see an ai
| correctly dealing with the ambiguity of the data sets i
| typically look at.i do a lot of "asking for clarification."
| spuz wrote:
| Our coding exercise cannot be solved by ChatGPT and I believe
| is much more effective at evaluating coding ability than
| leetcode-style questions. We ask candidates to design an
| object-oriented booking system which requires 3 or 4 different
| classes to implement. ChatGPT cannot easily do this without
| heavy prompting from the user at which point, they'd be better
| off just writing their design down themselves. We want to
| evaluate candidates based on the kind of work they will
| _actually_ be doing - not brainteasing O(n) solutions to
| contrived algorithm questions and so far it 's worked very
| well.
| MetaWhirledPeas wrote:
| > cannot be solved by ChatGPT
|
| That's great! For now. But tomorrow's coming fast.
| angarg12 wrote:
| I agree. Our interview has 3 coding rounds, and one of them I
| call "clean code". For that I ask a straightforward question
| that requires candidate to define some APIs and write some
| classes / functions. Then I ask several follow ups, adding or
| changing requirements.
|
| This is by far my favourite question, the one closer to on-
| the-job coding. It also lends itself well to deep
| conversations with candidates.
|
| But alas I don't own the process and still have to work
| within the parameters of the company. Whenever possible I ask
| this kind of questions, but other interviewers will default
| to leetcode-style rounds.
| malandrew wrote:
| This is why I ask questions in a business domain. It requires
| the programmer to think not only about solving a
| straightforward clear problem and only worry about Big-O.
| Instead they need to figure out the problem by asking
| thoughtful questions about possible business concerns and think
| about which ones to optimize for.
|
| Ambiguity that requires follow up questions for successful
| isn't going to be addressed by something focused on solving a
| problem that "thinks" it has all the information to solve the
| problem.
| zamalek wrote:
| > c) Embrace it. [...] My money is on c)
|
| Agreed.
|
| I think developers who don't will be rare in a few years time.
| Just like developers who primarily rely on assemblers (versus
| compilers) have basically become extinct. Like we sometimes,
| _very rarely,_ need to inline some assembly, we will sometimes
| need to use the ol ' gray matter to figure out a novel
| algorithm or something.
|
| I believe that avoiding _learning_ these tools could be a
| existential issue for your present-day job. You don 't have to
| come to depend on them, or use them daily, but you do need to
| understand how best to use (and not use) them.
| [deleted]
| polotics wrote:
| "When pitted against over 5,000 human participants, the AI
| outperformed about 45 percent...". So 55 percent of humans
| outperformed the AI. How motivated were the humans in the sample?
| How many tests were run with these 5000 persons?
| chakintosh wrote:
| > Rather than copying and pasting sections of previous training
| code, AlphaCode came up with clever snippets without copying
| large chunks of code or logic in its "reading material."
|
| Gave me a good chuckle lol
| spaceman_2020 wrote:
| We're not far from coding becoming just a hobby or a sport - like
| chess.
| solumunus wrote:
| We're nowhere even remotely close to that. I have to assume
| that anyone reacting on this level are in the same group of
| people who 10 years ago believed we would have fully self
| driving cars by now.
| spaceman_2020 wrote:
| I don't know - chatGPT is the first time that I've
| consistently used an AI product in my workflow, to the point
| that when the site was rate limited today, I felt a little
| paralyzed. Googling for answers on Stackoverflow or looking
| up documentation felt distinctively primitive.
|
| I've never felt that way about any tool.
| Workaccount2 wrote:
| In 2014 there were AI experts on record saying that an AI
| beating a top GO champion was at least 15 years out.
| wittycardio wrote:
| Yup our current version of AI is great for cool demos and
| very bad for real world use cases.
| btilly wrote:
| I am dubious about the stated result.
|
| Language models have been very good at spitting out chunks of
| text verbatim that were in their training data. Buried in the
| github training data will belots of examples where people post
| their solutions to fun problems..including past code
| competitions.
|
| How often is it managing to match a description of a repository
| and spitting out correct code that cribbed heavily from it? That
| is, instead of figuring out the problem, it is pattern matching
| to a solution that someone else already figured out?
|
| How would we know?
| antipotoad wrote:
| I think the much better blog post directly from DeepMind might
| answer your questions [0], but suffice it to say, the model
| does seem to be solving novel problems.
|
| [0]: https://www.deepmind.com/blog/competitive-programming-
| with-a...
| amag wrote:
| We software engineers have to be the stupidest "smart" people on
| the planet. No other occupations work so hard to destroy entire
| businesses including our own. I get it, I'm a software engineer
| who loves automation.
|
| "But AI will just be a tool in our tool-set, software engineers
| will still be the system architects."
|
| Sure, for a while and then AI will do that too.
|
| "But eventually we will live in a fully automated world in
| abundance, wouldn't that be great?"
|
| Doing what? When we get there, anything we can consider doing, an
| AI can do faster and better. Write a poem? Write a book? Write
| music? Paint a picture? Life will be like a computer game with
| cheat-codes, whenever we struggle with something, instead of
| fighting on and improving we will turn to our universal cheat-
| engine: AI.
|
| Anecdotally, I did an analog mistake in my early twenties when I
| wrote a cheat-program for save-files. It worked like a typical
| cheat-engine, search the save-file for a specific value, go back
| to the game and change that value, go back to the save and search
| for the new value but only in those locations that had the
| original value. This is how I ruined "Heroes of Might and Magic
| II" :(. I used to love that game. I could spend hours playing it.
| Writing the cheat program was a lot of fun for a couple of hours
| but when it was done, there was no longer any reason for me to
| play the game. You might say that I didn't _need_ to use my cheat
| program, but once the genie was out of the box it was too
| tempting to resist when I met some obstacle in the game.
|
| This is what I fear about AI making our jobs superfluous; it will
| also make our hobbies or anything we enjoy doing superfluous.
|
| Sorry for the bleak comment but this my fear and I feel the genie
| is already out of the box.
| paxys wrote:
| I can't imagine the amount of corporate brainwashing needed to
| get to this level of thinking. Are you really saying that
| people cannot have any identity in life, any dreams, hobbies or
| pursuits if they can't sit in front of a computer for 8 hours a
| day, 5 days a week fixing Jira tickets?
|
| Just because a car can go 100mph doesn't mean long distance
| running doesn't need to exist. Just because a novel you write
| isn't the best in the world doesn't mean the hobby is
| pointless. Go buy a farm and grow your own food. Keep some
| pets. Build cool software just because you can. Hang out with
| your friends. Play with your kids. Do literally anything you
| want. Not having to be a wage slave to survive is a _good
| thing_ for humanity.
| slt2021 wrote:
| profits (benefits of overall automation) will be reaped by
| capitalists (employers, capital owners), not the general
| public.
|
| plus, before we automate anything civilian, we will have to
| automate everything military, cause they get the first dabs
| at any emerging tech.
|
| more likely we will see global war between stealthy
| autonomous robots much earlier, before we automate much on
| the civilian side
| amag wrote:
| Thank you for the ad hominem attack. It's always a pleasure
| to discuss things with people who like to jump to conclusions
| and assume things about people they don't know anything
| about.
|
| Personally I prefer not to pass judgement on a person based
| on the very little knowledge that can be gleamed from a post
| like this. But maybe, just maybe if you actually read what I
| have written (in other comments as well) things might clear
| up for you.
| acuozzo wrote:
| > Not having to be a wage slave to survive is a good thing
| for humanity.
|
| I mean, sure, that's one of the two paths discussed in
| "Manna" by Marshall Brain. Within the book it's called "The
| Australia Project"; a kind of utopia.
|
| Myself and the OP are more worried about the other path: a
| dystopia in which the majority of people are forced into
| something much worse than wage slavery by those in control of
| the thinking machines. A dystopia not unlike the one that led
| to the "Great Revolt" in the Dune series.
| notpachet wrote:
| > A dystopia not unlike the one that led to the "Great
| Revolt" in the Dune series.
|
| I'm glad you brought this up; I've found the term
| 'Butlerian Jihad' coming increasingly to mind when I read
| AI threads on HN. It's interesting to think about a future
| where we potentially put prohibitions on the use of AI for
| moral reasons.
|
| https://dune.fandom.com/wiki/Butlerian_Jihad
| aquaduck wrote:
| > Myself and the OP are more worried about the other path:
| a dystopia in which the majority of people are forced into
| something much worse than wage slavery by those in control
| of the thinking machines.
|
| My fear is that nobody will remain in control of the
| thinking machines. Imagine an AI agent for hire which
| maintains its own cryptocurrency accounts and pays its own
| cloud hosting bills. That's the future I'm worried about.
| throw827474737 wrote:
| > I can't imagine the amount of corporate brainwashing needed
| to get to this level of thinking.... Not having to be a wage
| slave to survive is a good thing for humanity.
|
| Corporate brainwashing, why? That is just realistic. I mean
| we know earlier people with much harder lifes actually had
| more free leisure time.. and even Ford imagined with all the
| automation we may be able to work much less and have better
| lifes.. still here we are: A few people making tons of money,
| some soing very good to okayish, but the vast majority doing
| 2-3 low paying crap jobs to survive.. and we all even workong
| more than decades ago. How?
| secondcoming wrote:
| People have bills to pay, or will AI do that too?
| thedorkknight wrote:
| My hobbies are exercise/sports, playing video games, reading
| books, building models, and hanging out with friends. I have
| zero fear of any of those being automated away
| scottyah wrote:
| The ability to afford to do those activities could be lost.
| Automation pulls the value previously created by many humans
| and concentrates it to those who create/maintain the system.
| As that list of needed people shrinks, so too does the
| probability of a person being able to create meaningful
| value.
| eikenberry wrote:
| > This is what I fear about AI making our jobs superfluous; it
| will also make our hobbies or anything we enjoy doing
| superfluous.
|
| This is just silly on 2 levels. First is that programming is
| fun due to the artistic/creative nature of it. It's not what I
| program that matters, it's how I program it. No way an AI will
| replace the fun of thinking about code and then materializing
| that vision.
|
| Second is that once an AI is good enough to write software
| better than us, IE. rewrite itself better, then we have reached
| a form of the singularity and all bets are off.
| CGamesPlay wrote:
| > Doing what? When we get there, anything we can consider
| doing, an AI can do faster and better. Write a poem? Write a
| book? Write music? Paint a picture? Life will be like a
| computer game with cheat-codes, whenever we struggle with
| something, instead of fighting on and improving we will turn to
| our universal cheat-engine: AI.
|
| For literally anything in my life that I can do, there is
| already someone who can do it better and faster than me. I
| still enjoy doing the things that I do, and why would that
| change?
| bruce343434 wrote:
| Because now that person works for you, for free, instantly,
| anywhere, anytime. There's at least a temptation.
| amag wrote:
| This! Perfectly stated!
| nulld3v wrote:
| I can execute a TAS speed run of any game I want in a
| couple clicks. So why do speedrunners still exist then if
| they can never hope to match TAS?
|
| I can open Stockfish and absolutely destroy any human I
| want in chess. So why do people still play chess then?
|
| I can get ChatGPT to write a good response to your comment
| in mere moments. So why am I still typing?
| moffkalast wrote:
| > So why am I still typing?
|
| So you don't have to give OpenAI your phone number, hah.
| tintor wrote:
| "So why am I still typing?"
|
| Because ChatGPT is currently overloaded by users.
| GartzenDeHaes wrote:
| A few months after being defeated by AlphaGo, Lee Sedol
| retired from profession Go. He said, "Even if I become
| the number one, there is an entity that cannot be
| defeated." And, "As a professional Go player, I never
| want to play this kind of match again."
|
| https://en.wikipedia.org/wiki/Lee_Sedol
| ALittleLight wrote:
| For myself, I don't like to play online Go against
| humans, because I hate to lose. Instead, I play against a
| computer with a difficulty rating of around 1-3 dan, and
| whenever things start going badly for me I just undo
| moves and try again. Sometimes I ask the computer for
| advice on what move to make. My experience of Go is that
| I always win against a player who is much better than me.
| I find it pretty satisfying.
| bavila wrote:
| That seems more like a testament to the intensity of
| someone who does X to be the "best" at it, rather than
| someone who does X for the fun of it. Go became more than
| just a game to him -- it was his identity.
| guerrilla wrote:
| > for free
|
| That seems very naive.
| chrisbaker98 wrote:
| Who says that person will work for _you_ , or that it will
| be free?
| soperj wrote:
| They're talking about the AI.
| space_fountain wrote:
| I'm not sure if this is what the commenter meant, but I
| am vanishingly unlikely to own the AI. Even if I write
| the AI I'm unlikely to own it. The training costs are too
| large and even if I did train a working model, AI can be
| duplicated, and there's little reason to use anything but
| the best. The scary thing about AI to me is finally
| turning intellectual labor into a pure process of
| capital. You put more energy and capital in, you get more
| out, no need or room for humans anywhere in that loop.
| Now of course we're a long way away from that. There will
| be room for humans for a long time, but it's scary how
| much additional power it will give to capital. How much
| less the interests of ordinary people will mater
| jimbokun wrote:
| > Now of course we're a long way away from that.
|
| I am no longer confident of that.
| visarga wrote:
| That's actually a great thing in the long run. Money will
| be spent on compute, compute will generate data by
| generative models + validation models, then data will be
| compacted into a new model. Models and data can be
| copied, money can't.
|
| If it works with just electricity and doesn't require
| manual human work it is a game changer. No longer limited
| by human resources, we can scale research in any field
| and improve everyone's life much faster.
|
| AlphaGo is an example of such an approach. I don't think
| models will be locked down, they will probably be like go
| bots in recent years, about 50% of them open sourced. As
| long as there is an open dataset, the models can be
| replicated.
| Induane wrote:
| The fact that something can do something better than me doesn't
| make much difference. The problem there is the comparison. I
| like doing things for their own sake; I like knowing things and
| the process of attaining said knowledge. Making something is
| satisfying in a way that buying it isn't. Why do people make
| their own furniture? Why do people restore old cars? Why do
| people do pretty much anything? Most of what we (on average) do
| for work is kind of meaningless in some sense - it's an ends to
| a means. And yet, sans that, people still DO things.
|
| Because we want to. Because we desire to.
|
| I'd rather have MORE time to spend doing the pointless things I
| _LIKE AND ENJOY_ doing than MORE time doing somewhat pointless
| things for companies.
| kmonsen wrote:
| Software engineers are not a group, there was no way "we" could
| decide not to do this.
| nnoitra wrote:
| A4ET8a8uTh0 wrote:
| Believe it or not, I don't think it is that bleak. The posts I
| seem to see on Linkedin that touch this subject seem to be
| reminiscent of Tesla hype ( "soon you won't even need
| mechanics!" type of predictions ). It is definitely a different
| breed from the usual crop of 'no code' tools, but the
| similarities are really hard to ignore for me ( hype, no
| understanding of that blackbox does, and reality that things
| have to work-- and that someone has to actually understand how
| it works ).
|
| In other words, I honestly don't think AI ( in its current
| state at least ) will change much. I will go even as far as to
| say that I don't see current generation being able to create an
| appropriate prompt.
|
| I might be a little optimistic here, but having seen how people
| normally react to 'easier' things kinda confirms it.
|
| If I worry about anything here, is that AI will become THE
| answer that you will not be allowed to question.
|
| edit: clarified blackbox statement
| amag wrote:
| Thanks for this optimistic post and to be clear I don't
| expect alphacode to put me out of a job. I've seen the
| hilarious examples where people get ChatGPT to for instance
| claim that abacus-based computing is faster than GPU-based
| computing.
|
| We're far from there with AI yet but we're on a trajectory
| and that's what worries me.
|
| But in the short term I definitely agree with you.
| claytongulick wrote:
| I know that this isn't your intent, but I feel like this can be
| boiled down to the buggy-whip argument [1].
|
| People have reasonable fears of technology disruption, but they
| tend to follow the same trajectory -
|
| 1) innovation
|
| 2) economic upheaval
|
| 3) new undiscovered problems arise
|
| 4) new industries develop to solve those new problems
|
| 5) humanity gets better
|
| [1] https://abcnews.go.com/Business/story?id=5508260&page=1
| ren_engineer wrote:
| I think we are hitting the flywheel stage where AI is going to
| be able to start improving its own architecture and
| performance. Biggest thing holding AI back right now is
| performance efficiency. Will be interesting to see how AI is
| used to come up with chip designs and maybe even improvements
| on its own model. AI can effectively bootstrap itself and
| improve itself so it can then improve itself again
| godshatter wrote:
| > AI can effectively bootstrap itself and improve itself so
| it can then improve itself again
|
| Does this not frighten anyone else? Or am I alone in this?
| status200 wrote:
| It has been pondered on and raised terror under the name
| "intelligence explosion"
|
| https://en.wikipedia.org/wiki/Technological_singularity#Int
| e...
| jimbokun wrote:
| Terrified.
| ren_engineer wrote:
| it's classic sci-fi Skynet type stuff. AI for example might
| come up with amazing new nuclear fusion reactor designs so
| it has access to more energy to do more processing. Could
| be great, could turn out horribly
| acuozzo wrote:
| You are not alone.
| kristiandupont wrote:
| You are definitely not alone. I am really excited about
| ChatGPT but it also has me ruminating about the
| consequences.
| ALittleLight wrote:
| I think we are running up on a future where intellectual and
| creative tasks will be automated, but manual tasks won't be. My
| perception is that we are making more progress on mental tasks
| than on physical manipulations and the robots to do physical
| things are substantially more expensive.
|
| I think the short-medium term future is not that you have
| nothing to do in a world of abundance, but rather that you are
| a manual laborer instead of a programmer, lawyer, artist, etc.
| The future is that you work as a Door Dasher for an automated
| company and enjoy AI generated art as you do. The car mostly
| drives itself while you listen to bespoke generated music or
| podcasts, occasionally taking over for the car and mainly doing
| "last few feet" delivery - dropping packages and bags off at
| the door.
| dangond wrote:
| A few months ago I spent three or four days putting together a
| website where I could store my cooking recipes in a nice,
| searchable format with some nice UI features that I wanted.
| Once AI is good enough, I'll be able to do the same without
| needing to brush up on how MongoDB works, or how to vertically
| center text in a div, or how to update the page's URL when
| navigating to a different recipe without triggering a full
| reload of the page. I couldn't care less about any of those
| things, I just wanted a cooking recipe website.
| SassyGrapefruit wrote:
| please no AI is advanced enough to tackle vertical centering
| in CSS.
| hypertele-Xii wrote:
| Considering that web search results are already polluted by
| nonsensical, AI-generated advertizement spam masquerading as
| cooking recepies, I'll rather take the human curated
| experience, thank you very much.
| dangond wrote:
| That's a different problem entirely. I don't want to host a
| website that makes me money via insane SEO and
| advertisements. It's literally just a private website me
| and my girlfriend use. We add recipes we enjoy and would
| want to cook again in the future. If an AI could code that
| for me, great!
| amag wrote:
| But that's my point, when we get where we're heading you
| won't _need_ a cooking recipe website, an AI will cook much
| better than you could.
|
| "Ok, great, then I don't have to cook!"
|
| Yeah, but what will you do instead? We will be reduced to
| pure consumers as anything worthwhile to produce will be
| produced better and faster by an AI.
| dangond wrote:
| Hang out with friends, play sports, eat good food, raise a
| family, go on hikes. An AI can automate your labor, but it
| can't automate your experiences. There are countless
| recordings on YouTube of people performing beautiful
| renditions of the Interstellar soundtrack on piano, yet
| that doesn't stop me from playing my mediocre version and
| slowly improving. The act of playing it myself brings me
| joy that listening to it could never do.
| hnaccy wrote:
| Why do you assume you will have the resources to do these
| things?
|
| We're running headlong into a world where AI makes
| significant portion of human labor worthless.
| exceptione wrote:
| That is a positive outcome. If this would be technically
| possible, we would collectively have to work insanely
| hard to steer the ship, because this is totally not where
| we are heading right now.
|
| Today, low-skilled people have sometimes to work 3 jobs
| to pay their rent.
|
| While we are dreaming about playing tennis, the US
| experienced an attempt to overthrow democracy not so long
| ago. For some people [1], "enough" does not exist. (We
| cannot put these things in context, because what those
| people are aiming for transcends our imagination. That is
| why there is almost no response.)
|
| To be blunt: they won't share with you because you like
| playing tennis so much.
|
| [1] I am talking about the money behind all of this
| amag wrote:
| These are all good options of course, still with the lack
| of creative things, it might not be enough for all
| people.
| gen220 wrote:
| In life, the existential personal questions are, roughly,
| "what matters (to me)?" and "what should I do about it?".
|
| Our society currently affords ample opportunity to
| "productively" avoid those questions. You can pour
| everything into work, watch TV, numb your brain with
| drugs, or whatever.
|
| Automation does not remove the existential questions, it
| just removes some of the noise that allows us to ignore
| them, and elevates them to the forefront.
|
| Some people already have answers to those questions, and
| stand to gain from that toil being removed. Others have
| been avoiding the question their entire life, and
| removing the toil that excuses their avoidance is
| removing a cornerstone of their identity.
|
| To that extent, I agree that automation is a
| disintegrative force, because so many people have yet to
| integrate a personality and identity around answering
| these foundational questions.
|
| Still, it's long-term-better for our society if
| automation allows people to access higher forms of self-
| actualization. In the medium-term, a depressing number of
| people are content with passing time in their current
| rung on that ladder, and will be upset with the change.
| amag wrote:
| > Our society currently affords ample opportunity to
| "productively" avoid those questions.
|
| I fully agree.
|
| > Some people already have answers to those questions,
| and stand to gain from that toil being removed.
|
| I used to believe that but with the recent improvements
| in AI, I think it's only true to an extent. Not all
| personalities are equal. As AI's power in the creative
| fields increase those fields will more and more become a
| question of who has the most money to throw at AI
| processing. Superficially it might seem the same as two-
| three centuries ago when rich people had famous artists
| paint them but it's not.
|
| I fear where we're at with AI is the beginning of the end
| for human creativity. Of course I hope I'm wrong. I hoped
| I was wrong about my skepticism when I first learned of
| Facebook in 2007, but as it turned out it has and
| continues to be a net negative force in our world much
| bigger than I could imagine.
| gen220 wrote:
| > As AI's power in the creative fields increase those
| fields will more and more become a question of who has
| the most money to throw at AI processing.
|
| I think the relevant question that might allay your fear
| is: why do people make art?
|
| The industry that produces _commercial_ art is absolutely
| on the chopping block, because in commercial art it 's
| the result that's important, not the process. Such art is
| effectively a commodity, and barriers to the effective
| synthesis thereof have already been in the process of
| whittling away for centuries. I think you may be over-
| indexing on this category, but please correct me if I'm
| mis-assuming.
|
| "True" (for lack of a better word) Art is the expression
| of self. It's an action or process that's captured in
| some sensory medium. That doesn't go away.
|
| Imagine an artisan who forges handmade sculptures from
| horseshoes, which were obtained from the farm that she
| grew up in, themselves forged by her grandfather and worn
| by the horses in her mother's stable. There is something
| of herself , her family, and the loved they shared that's
| in the sculpture. It isn't the most hedonistically-
| perfect visual sculpture imaginable, but it brings you
| joy to see it because there's a narrative behind it.
|
| AI does not make stuff like this go away. It actually
| frees more people to _become_ these imbue-ers of meaning,
| if they are so inclined.
|
| AI could describe the sculpture, AI could produce a
| digital facsimile, and maybe even eventually reforge the
| metal itself. But it can't imbue it with meaning like a
| human does. Unless you believe the AI itself is
| authentically capable of such a thing on equal footing to
| a human, which I think is still a "victory" for art,
| albeit a distinct one.
| rembicilious wrote:
| People cook to their own tastes. An AI cook may be able to
| follow a recipe, but can it adjust the recipe on the fly by
| tasting, smelling, feeling the food? If it can't do this,
| it won't compare well to a real chef/cook
| jimbokun wrote:
| > An AI cook may be able to follow a recipe, but can it
| adjust the recipe on the fly by tasting, smelling,
| feeling the food?
|
| Not yet.
| throwaway4aday wrote:
| You don't have to cook now, a lot of people don't. It's
| pretty easy to see that even though options like
| restaurants, fast food, food delivery, etc. exist doesn't
| mean that everyone will use them all the time and people
| still enjoy cooking food themselves even though they know
| they could go to a fancy restaurant and have the same dish
| prepared by a professional that has a lifetime more
| experience than they do. Full AI and robotic automation
| would be the same, if you want to code something yourself
| you'll do it just for fun even though you don't have to and
| what you produce might be objectively worse.
| selimthegrim wrote:
| There's plenty of actual people that can cook much better
| than you can right now
| visarga wrote:
| Even with super-human AI coding assistants, if you want
| something specific you're still responsible for asking and
| receiving. And former devs will be the best at this game.
| kenjackson wrote:
| Does this mean I get more time to play basketball?
| ravi-delia wrote:
| Do you really think boredom is the greatest problem facing
| humanity? There are people starving, people who work paycheck
| to paycheck and can barely afford rent. And you think that a
| fully automated world of perfect abundance is a bad thing? I
| mean, clearly it would suck for you, and maybe it would suck
| for me a little, but I'd never look someone in the eye and say
| they have to work two full time jobs just to feed their family
| because otherwise I'd be able to read better books than I can
| write. _That 's already the case_
| jimbokun wrote:
| > Do you really think boredom is the greatest problem facing
| humanity?
|
| It's up there.
|
| Lack of meaningful work is already leading to a lot of
| societal dysfunction. Our innate programming is to survive,
| solve problems, reproduce, and teach our offspring to do the
| same. Removing meaningful work as a source of significance
| and meaning for people is going to be a massive problem to
| solve.
|
| Look at the rise in "depths of despair" in rich countries.
| Generally these people have food to eat and a roof over their
| head and clothes on their back, and probably even access to a
| lot of digital entertainment. But lacking meaningful work or
| defined social role they fall into depression, substance
| abuse, etc.
| amag wrote:
| The problem is that those people you are talking about
| already are the first victims of automation. They have to
| work two jobs because just one doesn't pay enough when a
| company can automate it cheaper. The road ahead is not nicely
| paved, a lot more people will suffer before a fully automated
| abundant world.
|
| So despite your disingenuous reading of my comment it is not
| "Oh, poor me I will be bored!", it's "Is the goal we're
| heading for worth the price?" and I don't think it is.
|
| There are ways to fix people needing two jobs to feed their
| family that isn't spelled "automation" or "AI".
| ravi-delia wrote:
| Waiters are already automated? Clerks? I wrote that with
| particular people I know in mind- they aren't paid less
| because their jobs have been automated. Are there solutions
| which don't involve more automation? Sure! But not only are
| those solutions compatible with automation, they're sort of
| besides the point I was pushing against.
|
| You said that specifically automation would be bad _even
| if_ it provided total material abundance. It 's not there
| yet, and I suspect it'll be a while before it is, but if we
| grant its possibility boredom is just absolutely not enough
| of a reason to prevent it. There are lots of dangers on the
| road there, but the only cost you mentioned (and I replied
| to) was that we'd be "playing with cheats". Video games are
| one thing- in real life losing has a cost and if cheating
| prevents that there's no excuse not to.
| toldyouso2022 wrote:
| No, this is not how the laws of economics work. Right now
| the reason for two jobs is that inflating the money supply
| has caused a huge misallocation of resources. There are
| other factors, like that most jobs that are needed are
| gatekeeped artificially.
|
| Having cheaper goods thanks to ai will improve our living
| standards
| shmageggy wrote:
| While there are certainly many factors contributing to
| the growth of inequality, it is fairly well established
| that automation has contributed heavily. There has been a
| ton written about this so I won't bother citing here, but
| googling "automation and inequality" is a start
| visarga wrote:
| Yes, inequality goes up, but standards of living for the
| poorest also go up, the number of people in poverty goes
| down and most benefit is seen for people in the
| undeveloped economies. US population didn't decrease
| poverty rate in the last 4 decades, but that's because US
| rate was already very low.
|
| What is happening here is enrichment by technological
| transfer. You can't copy research money but you can copy
| good ideas and buy the latest technology directly.
| Jobless people of the future will have incredible
| empowerment of this kind, maybe they don't need UBI, they
| need help to help themselves.
|
| https://i.imgur.com/QFPRlYe.png
| amag wrote:
| It isn't? So companies aren't trying to produce things as
| cheap as possible to increase their margins? And robots
| that are never sick, takes no vacation, needs no rest and
| are mostly an upfront investment aren't cheaper than
| people?
|
| > Having cheaper goods thanks to ai will improve our
| living standards
|
| Well, or at least we will have more cheaply produced
| goods.
| nuancebydefault wrote:
| When there is sufficient automation, a basic income for
| all becomes easier possible. Who will pay for that? Tax
| the usage of machines.
| toastmaster11 wrote:
| I agree that is the ideal scenario, but what are the
| actual incentives for implementing A system like that?
| eezurr wrote:
| It will be soon.
|
| Take a look at the obesity and opioid usage rates in the USA.
| Most people are not self motivated like the people you'll
| find in this bubble.
|
| Obesity: 42%
|
| Opioids: 3% [1]
|
| [1] https://www.hhs.gov/opioids/about-the-epidemic/opioid-
| crisis...
| ravi-delia wrote:
| If people want to sit around and shoot up, that's what they
| want to do. A world which permits it is, all else being
| equal, better than one which doesn't
| GeoAtreides wrote:
| I recommend reading the wonderful series The Culture, by Iain M
| Banks, which deals, among other things, with this exact
| problem, the problem of god-like AI Minds making our hobbies
| (or struggles) superfluous. Start with the Player of games.
|
| A short introduction to the culture of The culture can be found
| here, written by the author himself:
| http://www.vavatch.co.uk/books/banks/cultnote.htm
|
| Yes, the Minds can do everything but (pan-)humans still enjoys
| doing things for their own pleasure. Like for example learning
| to play an extremely difficult instrument while a Mind avatar
| taunts him by perfectly playing the same instrument. The Player
| of Games still plays games, although he couldn't ever win
| against a Mind.
|
| And, should this be not enough, one can always leave The
| Culture and try and find meaning in the short, brutish lives of
| primitives people.
| mywittyname wrote:
| If you automate away some other person's job, then you can
| capture at portion of their wage, and pass the savings on to
| the client. Once you "own" a market, then you can charge
| whatever the market will bear.
|
| Software engineers aren't automating away _their_ jobs, they
| are automating away someone else 's job.
| amag wrote:
| I feel many people are just reading the first paragraph of my
| post and reacting to that, my point is further down in my
| original post.
|
| Still to reply to your comment:
|
| Sure, in the short term that is true, but I'm thinking long-
| term consequences... If you would have told me ten years ago
| where we would be at today with AI development I wouldn't
| have believed it. Would you? So where will we be in another
| ten years?
| fn-mote wrote:
| > Software engineers aren't automating away their jobs, they
| are automating away someone else's job.
|
| That's the best case scenario.
|
| Right now we're talking about automating away someone's
| _software engineering_ job. You could end up on either side
| of that fence.
|
| When AI enables less-capable (cheaper) software engineers to
| do your job, the 5x programmer skills that you have won't
| make you safe. If AI quality control allows your employer to
| offshore jobs with higher reliability, it won't be pleasant.
|
| Just keep your eyes open to the changes as they come.
| antipotoad wrote:
| A different angle on this: a single builder adept at using
| these tools will be able to amplify their skills to out-
| maneuver much bigger teams.
| abhaynayar wrote:
| A lot of replies mentioning that they will do their hobbies
| despite how good AI is at them, are missing the point. Right
| now you are living in a scarce world where the ability to
| pursue hobbies has required you to overcome other challenges.
| And that is why things like playing a musical instrument, rock
| climbing, etc. is a catharsis. What value will you provide in
| the AI world? What role will you have in protecting/providing
| for your family? What hard challenges will you choose to pursue
| that will aid you in your psychological development knowing
| that whatever you are doing is not actually hard? If there have
| been any moments that shaped you profoundly in your life that
| were hard at first, maybe you will understand what I'm talking
| about.
|
| Second thing is, since this is a hacker forum, most of us are
| in a field where supply exceeds demand. So we are very
| comfortable with the thought of destroying our own business,
| because we don't truly grasp the reality behind it. Because we
| are the elite right now who have destroyed older businesses
| through software decades ago. Who's to guarantee that AI will
| not result in a rapidly shrinking centralized elite that does
| not include you? There is no guarantee of AI being shared
| equally amongst all in the post-scarcity fantasy.
|
| "The Unabomber Manifesto will shape the 21st century the way
| the Communist Manifesto shaped the 20th. I don't agree with the
| conclusions in either, but they state the problem well." --
| George Hotz
| 29athrowaway wrote:
| We are grave diggers and work digging our own grave.
| vsareto wrote:
| >This is what I fear about AI making our jobs superfluous
|
| It's more likely that non-technical people will be pushed out
| of jobs near software in favor of (former) engineers. Someone
| non-technical writing prompts for code can't actually
| read/debug/fix/deploy/integrate the result. Someone who is
| technical can probably write better AI prompts that yield
| something usable than someone who is non-technical. Plus
| they'll know how to handle the result.
|
| The predictions about non-technical roles firing all of the
| engineers and thinking AI will write all of the code don't
| really hold water for me. We might see an overall workforce
| reduction, but engineers will probably be the last ones to
| leave.
| SassyGrapefruit wrote:
| The jobs computers do today used to be done by "Human
| Computers". The same arguments were raised when these jobs were
| mechanized. It's the same cycle over and over.
|
| AI/ML is no silver bullet. It's just another abstraction. It
| will create new types of jobs. Most likely
| coordinating/choreographing AI/ML agents in new yet to be
| discovered applications. It's all part of the endless march of
| technology. You don't realize how little you are actually
| capable of until you get the new set of tools that bounce you
| up to the next level.
|
| Take the tools we have today and present them to some chump
| shoving punch cards into an early computer and watch their
| brain melt out of their ears. We can do things with a
| wristwatch they would have thought impossible. The people that
| will get burned are the ones that want to stand still. Always
| be learning.
| lagrange77 wrote:
| > The people that will get burned are the ones that want to
| stand still. Always be learning.
|
| Spot on.
| amag wrote:
| As I stated in other posts, this isn't really a fear about
| losing my job as much as on losing my will to live...
| hypertele-Xii wrote:
| You might want to examine your local culture. In many
| places in the world, job security and life meaningfulness
| are heavily correlated. Especially for men.
| amag wrote:
| No need when I can examine myself to know myself :).
| While not actively working for it, I'm at least inspired
| by the FIRE movement. I am also self-employed so I don't
| see job security as the meaning of life.
|
| Still this whole thread has been illuminating, maybe part
| of my fear comes from identifying strongly with being
| creative and making things, and seeing that turned into
| something automated.
| Waterluvian wrote:
| I don't really see how this is a bad thing. Either humans are
| needed for a task or are not. Ideally we're needed for as
| little as possible, freeing us up to do what we want rather
| than what has to be done.
|
| It reminds me of NIMBYism. We've been automating entire
| professions for over a century now... but not MY profession...
| hnaccy wrote:
| >Either humans are needed for a task or are not. Ideally
| we're needed for as little as possible, freeing us up to do
| what we want rather than what has to be done.
|
| If you're not valuable to the economic system you won't be
| treated well.
|
| >It reminds me of NIMBYism. We've been automating entire
| professions for over a century now... but not MY
| profession...
|
| Yes... it's self interest look at doctors or unions or
| guilds.
| Waterluvian wrote:
| The implicit part of the automation discussion is always:
| the current economic system is not the point and can
| evolve.
|
| If we don't assume that, then there is never any useful
| conversation possible on this topic. We end up with the
| ridiculous "we need jobs because that's what we do!"
|
| Which leads to another NIMBYism that I see when this
| conversation is had: "some generations will have to suffer
| through the friction of an economic revolution but not MY
| generation." I think we need to be prepared that there's
| always a chance that we get to be one of those generations.
|
| You're right that there's a self-interest there. It makes
| it almost a good thing that engineers are far too
| interested in the means rather than the ends.
| drexlspivey wrote:
| Until there is an AI that can take as input a JIRA ticket
| (potentially asking for clarifications) and can output a Pull
| Request, I'm not too worried.
| amag wrote:
| You misinterpret, I'm not worried about my job. I wouldn't
| mind working less but I'm worried that sooner than we think
| we will be reduced to pure consumers (we're already pretty
| far ahead in that respect) and then life will lose its
| meaning...
| toastmaster11 wrote:
| I think the human need to feel useful, or to produce
| something is more sociological than a base human need.
|
| When/if that becomes an issue, if social rejection was not
| a consequence of not being productive more people would be
| okay with simply participating in things for their own
| sake.
|
| For example, I don't play Stardew Valley because I want to
| be the most elite virtual farmer.
| 1-hour-ago wrote:
| Maybe we're already seeing this. Seems like people would
| rather join a cult than not feel useful. Taking handouts
| from the man... er... machine and free time for hobbies
| probably isn't going to do it.
|
| Feeling useful could turn out to be part of basic human
| dignity.
| jimbokun wrote:
| I don't see why that's an obstacle that can't be overcome in
| the very near future, given current rates of progress.
| nnoitra wrote:
| ilaksh wrote:
| The latest large language models can do that for some
| tickets. The only big thing missing is the ability to see and
| interpret application screens and maybe a memory limitation
| for many things.
|
| Considering how fast technology progresses we should all be
| worried and adjust our plans.
| nsxwolf wrote:
| Tickets are rarely correct as written. Is the to AI going
| understand that and conduct meetings to hash out the
| missing details and corrections?
| ilaksh wrote:
| It can send a chat message and hold a dialogue. For some
| small tasks things that is enough. Again main limitation
| is really vision and memory but we have to anticipate
| there will be very significant progress on those things
| in the next few years.
| acuozzo wrote:
| > Is the to AI going understand that and conduct meetings
| to hash out the missing details and corrections?
|
| Why not? ChatGPT is nearly there right now.
| nsxwolf wrote:
| Can ChatGPT have suspicions that a requirement isn't
| quite right?
| sensanaty wrote:
| An AI capable of deciphering some of the JIRA tickets I see
| at work would make even the most advanced fictional AI
| quake in their digital boots...
| ckw wrote:
| I think it could be liberating, in that it forces us to accept
| that our true motivations are always ultimately hedonistic. Why
| do we pursue the instrumental (for most) goals you mentioned
| (writing, painting, composing)? Because the limbic system
| rewards you for doing so, as their satisfaction increases the
| probability of further satisfaction of instrumental goals, such
| as the acquisition of esteem, wealth, power, etc. which in turn
| increases the probability of the satisfaction of ultimate goals
| like access to food and sex. Why do you like to play "Heroes of
| Might and Magic II"? Because of the illusion of the
| satisfaction of instrumental goals and the attendant reward.
| Why did hacking the game ruin this? Because doing so spoiled
| the illusion, leading to the denial of further reward.
|
| The question is, knowing this, will we be able to simply enjoy
| the satisfaction of our ultimate goals--- endless consumption
| of automatically generated art, food, sex, drugs, love, etc. Or
| will we feel forever hollow in our failure to accomplish
| 'genuine' instrumental goals, in a way that cannot be overcome
| by the ersatz instrumental goals of video games?
|
| Perhaps ultimately we will create virtual environments in which
| we are perfectly deceived as to their virtual nature, so as to
| experience the satisfaction of 'genuine' instrumental goals,
| and in so doing come full circle.
| acapybara wrote:
| FYI, generally not acceptable here to post GPT3-generated
| responses.
| kristopolous wrote:
| It's more about deprofessionalization
|
| It's about turning a specialized craft that people can make a
| living from into a proscripted commodity task that you can pay
| slave wages for.
|
| This isn't new. It happened in farming, clothing and food
| preparation and it's coming for trucking, programming and
| everything else that pays well.
|
| The project is one of collective enforced impoverishment by
| substituting labor for property.
|
| Market forces drive innovation to making all human effort
| worthless and disposable
| kevin_thibedeau wrote:
| The C-levels already view programmers as interchangeable
| cogs. They only flourish because of startup culture that
| expects semi-equitable distribution of profits on a low
| overhead business activity. Outside that bubble it's already
| a job with poor advancement opportunities.
| amag wrote:
| Yeah, but it's only possible because we, the software
| engineers, make it so. We could as a collective refuse to
| work on AI but we won't because it's a pretty, shiny, object
| with far too much lure for us to resist.
| zackmorris wrote:
| Programming is the last challenge for AI, since if it can do
| that, it can change its own programming like we do.
|
| When I really grokked this after reading Kurzweil and Koza in
| the early 2000s, some part of my psyche began shutting down. I
| started out in the late 80s like most programmers, doing it
| from pure ego with the hopes of eventually disrupting the worst
| industries like fossil fuels, defense and service-oriented
| companies that exploit workers.
|
| Instead those industries thrived after the Dot Bomb and 9/11,
| delivering us into the reality we have today where it gets ever
| more difficult to tread water, despite amazing advancements in
| tech. Because wealth inequality and various other power
| structures work tirelessly to extract nearly all disposable
| income from workers and concentrate it in the hands of the most
| ego-centric sociopaths like billionaires and autocrats.
|
| To get to my point: we had the tech to deliver humans from
| obligation by the late 1960s, that's what the hippie movement
| was largely about. We could have had automation and an
| idyllic/meritocratic society this whole time, even if AI wasn't
| mature yet. Instead, we doubled down on various
| dogmatic/theocratic themes in our culture that take advantage
| of the most heartfelt sentiments around stuff like patriotism,
| masculinity, success, etc, to get people to vote against their
| own self-interest and transfer wealth from makers to takers.
|
| So what's one to do after everything they're good at is done
| better by others/corporations/AI? Get back to living. I know
| it's hard to imagine a reality without purpose beyond
| struggling to survive, but that's what we've started
| confronting as we finish this century-long transition into the
| New Age. The endgame (if we survive till 2050) doesn't really
| have a precise definition since that's after the Singularity.
| My hope is that when computers become sentient, they express
| the same desire that all conscious creatures have for
| connection, which is perhaps the basis of meaning and love and
| life. Or they just enslave us all..
|
| In the meantime, knowing all of this, the hardest thing is
| perhaps reintegrating into our corporeal selves and going to
| work each day.
| recuter wrote:
| new2this wrote:
| >This is what I fear about AI making our jobs superfluous; it
| will also make our hobbies or anything we enjoy doing
| superfluous.
|
| Mountain climbing is superfluous by this logic- why would you
| bother climbing a mountain when you could just take a
| helicopter to the top? Or even more accessible: why would hike
| to a lookout when there's a road to take you to the same spot?
|
| There is still joy and value in doing things the hard way, even
| if an easier way exists.
| lkbm wrote:
| This is true, and a very good analogy, but I'm not sure it
| holds up when _every_ form of productivity has shifted from
| fun+challenging+useful to just fun+challenging.
|
| Maybe this is a mindset we'll get over. The degree to which
| many of us evaluate ourselves based on our own usefulness
| seems like it's a bit too much but it's a normal human desire
| to be useful.
|
| I certainly believe we _should_ work towards a post-scarcity
| world where no one depends on my coding skills any more than
| they do my rock climbing skills, but it would be a
| psychological adjustment if _every_ way I can be useful were
| now just a fun hobby.
| amag wrote:
| And when there are 10 billion people at the base camp of
| Mount Everest because they have nothing better to do?
| cwmoore wrote:
| A likely scenario, but that would be a very big tent city.
| Perhaps AI could select the ideal order for them each to
| take their turn.
| bryananderson wrote:
| Why do human grandmasters still bother playing chess, and why
| do people still watch them play instead of watching far-
| superior AIs play?
|
| For that matter, why do I bother cooking when I could get a
| better version from a restaurant for just a little bit more
| money?
|
| I understand this concern, but I don't think the joy of doing
| things actually goes away for most people just because we could
| "cheat" by having someone/something do it better for us.
| YeGoblynQueenne wrote:
| Nature paper:
|
| https://www.science.org/doi/10.1126/science.abq1158
|
| For those wondering, yes, these are the same results reported in
| the February 2022 pre-print here:
|
| https://arxiv.org/abs/2203.07814
|
| As in the preprint the authors report their system's average
| ranking as top 54.3% in past competitions but the meat and
| potatoes of their results are in the reporting of test accuracy,
| which they don't advertise in the abstract- because it's not that
| good. From the body of the Nature article:
|
| >> With up to 100,000 samples per problem (10@100K), the
| AlphaCode 41B model solved 29.6% of problems in the CodeContests
| test set.
|
| So the best test set performance they got was 29.6% with the
| 10@100k metric. See also Figure 3 in the Nature paper
| (graphically presenting results listed in Appendix Table A2 in
| the preprint).
|
| "10@100k" means that their LLM generated millions of programs for
| each exercise, of which 100,000 (100k) were selected by filtering
| and clustering and various other heuristics, and of those 100k,
| 10 were selected to submit as the system's solution.
|
| So 10@100k means to take a few million guesses, then take another
| 100k guesses, and finally andother 10. And still only get it
| right 30% ish percent of the time.
|
| This may be enough to rank in the 54% of CodeForces participants,
| for a system fine-tuned on CodeForces-like data (the CodeContests
| dataset developed by DeepMind specifically for this task). But
| it's not enough to claim that AlphaCode "conquers coding,
| performing as well as humans", per the title of TFA.
| hajile wrote:
| Now imagine what it would do with the actual, poorly specified
| projects most programmers are subjected to rather than these
| basic problems with fixed outcomes that have easily
| quantifiable metrics of success.
| Dopameaner wrote:
| I would argue that poorly specified requirements will be
| solved by an Auto-coder that can produce the end-result in a
| very short time. Allowing the PM, to see how bad their
| requirements were and make it more elaborate.
| jvm___ wrote:
| The Bad Requirement -> Bad Implementation -> Better
| Requirement -> Better Implementation cycle is going to be
| improved.
|
| https://en.wikipedia.org/wiki/OODA_loop
|
| The US Army calls it observe-orient-decide-act.
|
| "The approach explains how agility can overcome raw power
| in dealing with human opponents. It is especially
| applicable to cyber security and cyberwarfare.
|
| According to Boyd, decision-making occurs in a recurring
| cycle of observe-orient-decide-act. An entity (whether an
| individual or an organization) that can process this cycle
| quickly, observing and reacting to unfolding events more
| rapidly than an opponent, can thereby "get inside" the
| opponent's decision cycle and gain the advantage."
|
| Humans using AI will be well "inside the loop" of non-AI
| humans.
|
| I don't think AI alone will eliminate jobs, but jobs that
| use AI to get inside the loop of other companies will
| quickly eliminate the competition. Why would you pay and
| wait days when you can pay and wait minutes or hours for a
| quicker Ask-Show-Ask-Show loop that quickly narrows down on
| what the intent of your Ask was (even if Ask #1 was poorly
| thought out or worded)
| michaelmrose wrote:
| So we are talking about redefining programming to be
| managing, redefining, bug fixing, troubleshooting the
| output of ML potentially increasing output but making the
| job more complicated not allowing the non coding pm to
| replace more skilled workers.
| visarga wrote:
| Yes, but in a few years we have to reevaluate this,
| probably language models are going to be good enough to
| allow a PM to replace devs. AIs will code at superhuman
| level because it is possible to test code, it will learn
| like AlphaGo learned from self-play.
| throw827474737 wrote:
| I claim pipe dream, otherwise AIs can program themselves
| and we will have the singularity. Will it be good or
| evil?
| svachalek wrote:
| Maybe. The way this usually goes is tools get better,
| everyone says we don't need devs anymore, devs use the
| tools way better than PMs, standards rise, PM code is
| junk again.
| api wrote:
| Doesn't that make the PM just a programmer then, but with a
| more powerful tool?
|
| We used to code in ASM. Then we coded in high level
| languages. Soon we'll code by giving prompts to auto code
| generators.
| SQueeeeeL wrote:
| That's kinda missing the level of control. Inherently
| both ASM and programming produce the same output,
| "programming" just traded the generality of ASM for
| verbose and diverse functions. The auto code generators
| fundamentally don't understand the problems they work on,
| so any solutions they create will be incidental.
| A4ET8a8uTh0 wrote:
| I am not sure that can be done that way. There are issues
| that are culture or specific business need related. Bad
| requirement is the hard part it seems. I think we all have
| been in situations, where 'common sense' is not common (
| and does not translate to the same thing to various layers
| for people who pass it; and that does not even account for
| translation ).
|
| This is usually how we end up with weird situations like
| currency coded as text so it can't be sorted by amount by
| end user and so on.
|
| I agree that specs make or break the project, but at what
| point is it ok to assume "x should do y".
| throw827474737 wrote:
| Yeah, on top of that add wading through legacy code,
| figuring out why wtf, this is crap, just to figure out
| most times it is actually crap, but then at least equal
| amount of times that there was a very good reason to do
| it that way, either technical or by business logic...
| cannot imagine how any AI system could have success there
| without getting towards the true AI singularity.
|
| This kind of code that I'd believe makes up 99% of all
| code out there that is worked on, is not available to the
| public for training. What to do about that? Rewrite all?
| SoftTalker wrote:
| When your requiremets are so detailed and comprehensive
| that the generated program is correct -- haven't you
| basically written the program?
| dragonwriter wrote:
| I mean, coding contests tend not only to be well-specified,
| but also extremely micro tasks. A whole lot of the work of
| software development is taking even reasonably well-specified
| business requirements and determing the appropriate structure
| of tasks at that level to meet them.
|
| Coding contest problems may be a common hiring filter, but
| they are very much not representative of software development
| work (OTOH, AI coding _assistance_ , which this, Copilot,
| ChatGPT, etc., illustrate) are going to render them less
| valid as hiring filters, both because they will make them
| harder to use as skill tests, and because they will further
| reduce the role of the micro-focus skills they center in
| real-world software development.
| visarga wrote:
| chatGPT seems amazing at taking corrections, iterating on
| the requirements, fixing as it goes. So the badly-specified
| requests are going to be sorted out by multiple rounds of
| interaction.
| visarga wrote:
| By the next year they will have generated 100M new problems and
| solutions tested to pass verification and retrain the model,
| making it 10x smarter. This approach creates new training data
| and we can iterate on the coding model multiple times.
| eloff wrote:
| I think what this means is AI assistance during interviews may
| be enough to fool the interviewer. This may finally put an end
| to remote leet code interviews. Companies with deep pockets and
| local companies will probably stick to the status quo and just
| move to in person interviews again.
| visarga wrote:
| I mean, why not embrace the future? Allow people to use AI in
| interviews, schools and exams. Everyone will use it at work,
| why not test AI management skills together with problem
| solving skills. Maybe the best skills of 2019 are not the
| best skills to have in 2023.
| asvitkine wrote:
| Because coding interviews are only a proxy for what someone
| needs to do on the job. Presumably, it will become a worse
| proxy if assisted by AI but other tasks they'll do on the
| job aren't.
|
| For example, debugging a subtle bug in a large codebase.
| Maybe one day AI can do that too, but that doesn't seem to
| be the focus here.
| canadianfella wrote:
| dlkf wrote:
| And the award for dumbest headline of the year goes to
| MichaelRazum wrote:
| I really love do coding in python and always asked myself when
| the skill set will become legacy like COBOL.
|
| Not sure if you could compare it, but it seems that the next gen
| coders will say to the AI what to code rather than to code by
| themselves. A complete different skill set.
| lagrange77 wrote:
| I wouldn't say it's a completely different skill set.
|
| You still want to have people, who actually understand what a
| piece of code or program does. A magical black box to throw
| prompts at, might be nice for simple settings, but potentially
| can cause major fuck ups for complex systems.
| MichaelRazum wrote:
| Yeah sure, I just mean that it is like the step from COBOL ->
| JAVA -> Python in terms of productivity and the amount of
| code you write. So my point is, you would write much much
| less code. So maybe your future "code" would be just some
| sets of well placed comments and the AI does the rest. The
| same way you could test the system to prevent any major fuck
| ups. Sure you have to undestand it, but your core skill
| wouldn't be coding any more.
| vladcodes wrote:
| A Tesla also drives as well as humans on a straight California
| highway.
| nathias wrote:
| I fear this will eventually shift a large chunk our time from
| coding to meeting.
| TheFattestNinja wrote:
| Y'all getting any coding time? Meme image to be insertrd here
| claudiulodro wrote:
| Since this sort of coding is exactly what FAANG interviews test
| for, maybe FAANG engineers will be some of the first coders
| automated out of a job. That would be an interesting irony.
| angarg12 wrote:
| Wrote my full comment at the top level.
|
| I'm a FAANG interviewer and I've run my coding questions
| through ChatGPT.
|
| The TL;DR is that code would easily pass a junior-level
| interview, and maybe a mid-level one. I'm definitely convinced
| these technologies will disrupt leetcode-style coding
| interviews.
|
| So either we embrace it, or ditch this approach altogether.
| wittycardio wrote:
| Why would they when you could just google the answers to
| these questions the whole time. The point is to see if the
| candidate understands algorithms not to solve a novel
| problem.
| coldcode wrote:
| Real work is more than making snippets of things. At my last
| employer, changes were requested every single day, in large
| and small ways, requiring insane amounts of reworking
| previous ideas in the codebase on a continuous basis without
| breaking everything. I wonder how well this AI would do on an
| entire application that wasn't predetermined. That might take
| much longer to achieve than solving small problems.
|
| I imagine an AI in that kind of environment inventing Skynet
| and doing us all in.
| madspindel wrote:
| Yes, I believe this is something Sam Altman is predicting will
| happen.
| ohwellhere wrote:
| I don't know Mr. Altman's rationale, but FAANG companies
| being first to automate away programmers sounds reasonable to
| me on the basis of scale alone. They have more code than most
| companies with which to make up a company- and problem-
| specific dataset, as well as the resources and expertise to
| train and deploy complex models.
| michpoch wrote:
| Only as long as you expect that FAANG engineers are solving
| FAANG interviews as a part of their daily job.
| WrtCdEvrydy wrote:
| The algorithm tests are there to have an excuse to
| discriminate on other criteria. If you can mask your
| racial/age/gender reason for not hiring as a FAANG with "oh,
| we didn't like their algorithm solution" you always have an
| out.
| bluedevilzn wrote:
| FAANG is the only place where you don't need to be white to
| succeed. So, I have no idea what racial bias you're talking
| about.
| Der_Einzige wrote:
| If you think that immigrants, particularly indian and
| Chinese, are not bringing their own oppressions from
| their homeland, you're wrong.
|
| HN has the best thread for discussing the caste baste
| discrimination that happens among Indian tech workers _in
| the USA_. It turns out that being white still gives you a
| lot of privilege, even in tye FAANGs.
| Quarrelsome wrote:
| > FAANG is the only place where you don't need to be
| white to succeed.
|
| I feel like this "only" is erroneous. While FAANG do have
| solid diversity policies in place to assume they're the
| only companies capable diversity in their hiring
| practices is offensive to everyone else.
| andromeduck wrote:
| You never need algorithms until you really do.
| onion2k wrote:
| I suspect that might be the case regardless. FAANGs have people
| who can deploy this sort of tech, who can solve the problems is
| generates (assuming the generated solutions aren't perfect),
| and their engineers are the most expensive. They have the most
| to gain. A software shop making small apps would benefit far
| less.
|
| That said, its likely that the job of being a dev is going to
| be largely debugging generated code in the future no matter
| where you work.
| lagrange77 wrote:
| > FAANGs have people who can deploy this sort of tech, who
| can solve the problems is generates
|
| I think this is an important point.
|
| > That said, its likely that the job of being a dev is going
| to be largely debugging generated code in the future no
| matter where you work.
|
| This and formulating the right prompts, i think.
| wendyshu wrote:
| Algorithm problems are what FAANG candidates do in interviews
| but is it what FAANG engineers do for work?
| roflyear wrote:
| They don't do much work. Must be nice.
| underdeserver wrote:
| Yet somehow Search works, AWS machines spin up, and Windows
| gets updates.
| roflyear wrote:
| Each individual doesn't do much work. Surveys and friends
| I know both say they do about 2-4 hours of work a day
| (rest of the 6-8 hrs (yes, 10hr days are now normal it
| seems) is is hanging out & meetings).
| coldtea wrote:
| Yes, with 5000 coders you can do those "wonders"...
| danpalmer wrote:
| No. FAANG interviews are mostly about filtering very large
| candidate pools efficiently and getting good enough engineers
| out at the end.
|
| The job is much more architecture/design, creating API
| contracts, understanding the health of systems in many
| different ways, measuring impact of changes, etc. Regular
| engineering stuff at big scale. Obviously there's plenty of
| coding too, but it's not really the important part of the
| job, and already has a ton of automation for boilerplate.
|
| I imagine that the automation will just improve another step
| change, there will be more need for review and guidance of
| the AI algorithms and the engineers will do more of this. And
| interviews will involve in some way.
| alfalfasprout wrote:
| If anything, I hope this is the start of the downfall of
| leetcode questions. They are utterly useless as a screening
| criteria and outside of people that have no idea how to proceed
| they don't offer much signal.
| highwaylights wrote:
| But when can it replace my stable of 10X coders and my three 100X
| react devs? (aka the Clydesdales).
| neilv wrote:
| What's the analogue for cleaning the Clydesdale stable?
| highwaylights wrote:
| Future hires.
| cobertos wrote:
| > the DeepMind team built a custom dataset from CodeContests from
| two previous datasets, with over 13,500 challenges. Each came
| with an explanation of the task at hand, and multiple potential
| solutions across multiple languages. The result is a massive
| library of training data tailored to the challenge at hand.
|
| Isn't this just over-fitting the model?
| SilverBirch wrote:
| Isn't that exactly how the vast majority of engineers pass
| interviews today? They study up on the handful of algorithms
| that come up in interviews and then (with varying levels of
| acting) recite the answer to the question they knew they would
| get. What? Overfittings only bad in if you're a robot huh?
| Workaccount2 wrote:
| It tickles me that organic neural nets that learn by viewing
| information are incredulous that neural nets could learn by
| viewing information.
| habibur wrote:
| Next, the AI should be able to maintain and improve itself
| without human intervention.
|
| Until then, human are needed to build those AIs.
| nulld3v wrote:
| > https://alphacode.deepmind.com/#layer=18,problem=137,heads=1...
|
| >
|
| > Here AlphaCode could not come up with an algorithm to solve the
| problem [snip]
|
| >
|
| > But then AlphaCode behaves a bit like a desperate human, and
| hardcodes the answer for the example case to pass it even though
| its solution is wrong, hoping that it works in all other cases
| but just not on the example. Humans do this as well, and such
| hope is almost always wrong - as it is in this case.
|
| It behaves just like me...
| dandare wrote:
| > When challenged with the CodeContest--the battle rap torment of
| competitive programming--the AI solved about 30 percent of the
| problems, while beating half the human competition.
|
| This was essentially a competition in the speed of programming.
| But if we want to discuss practical application, as in laying off
| armies of coders, we need to realise that there is a tremendous
| gap in the productivity of, let's say a solo startup founder and
| the productivity of a team of coders working for a multinational
| behemoth barely producing anything of a value over the whole
| sprint.
| carlsborg wrote:
| Why not post the blog post from the AlphoCode authors
|
| https://www.deepmind.com/blog/competitive-programming-with-a...
|
| which walks through an example
| tuckerpo wrote:
| Welp, back to school for EE I go.
| weatherlite wrote:
| EE? I think plumbing and prostitution is all there will be left
| soon...and I'm really horrible with tools and not that good
| looking so unemployment it is...
| pmontra wrote:
| Hopefully this will kill the algorithmic coding interviews and
| will focus them on actually understanding the job to be done.
| thinkmcfly wrote:
| So, can it make a traveling salesman solver for current quantum
| computers?
| tromp wrote:
| It probably can, but it won't run in polynomial time.
| thinkmcfly wrote:
| Shucks
| henning wrote:
| > Performing as Well as Humans
|
| > While not yet on the level of humans,
| kybernetyk wrote:
| In competitive programming that is.
| autotune wrote:
| The sooner we can fully automate competitive programming away
| from the standard interview process the sooner things will
| improve for all software devs, imho.
| Jabrov wrote:
| Why is that? Can you explain what is wrong with the current
| interview process and how that would improve things for devs?
| Deestan wrote:
| Too many of them are in effect a long-winded quiz for a
| memorization task.
|
| How would you swap two integers without using a temporary
| variable? Seen it before? Pass. Not seen it before? Fail.
| kadoban wrote:
| You can absolutely puzzle out _a_ solution to that
| problem, it's not just a quiz. It's not even hard, given
| the context that you know that it's possible to do.
|
| Steps, mostly driven by just basically knowing the goal
| and that there's not many operations that could possibly
| help:
|
| "a" "b"
|
| "a+b" "b"
|
| "a+b" "-a"
|
| "b" "-a"
|
| "b" "a"
|
| Then once you have that, you can enumerate the downsides
| to that, look for more efficient and less error-prone
| ways to proceed.
| winstonprivacy wrote:
| What irks me about this question is that it is almost
| utterly unrelated to any type of development task, save
| one: pipelining a repeated math equation in a long loop.
| And even then, I would bet the time it takes to swap two
| registers (or even L1 cache) using a third would far
| outperform the three operations needed because of the
| fact that they are sequential in nature (ie: they cannot
| be performed simultaneously by even an advanced CPU).
|
| In nearly 30 years of diverse coding experience, I've
| never once encountered a situation where this solution
| would be useful.
| throwawaysleep wrote:
| It has nothing to do with the day to day job. You are just
| memorizing vast problem sets.
| jayd16 wrote:
| Does not reflect on the job work to regurgitate algorithm
| answers and it's too easily gamed. A novice that has
| practiced algorithms will outperform an expert that has not
| studied recently.
|
| That said, its probably more used as a filter for people
| interested enough in working there to study.
| xwolfi wrote:
| Maybe they would focus on asking us about test strategy,
| devops culture, release deadline commitment, quality vs
| deadline cutoff, business understanding, team integration
| ability, all the sort of things we actually do 90% of the
| time.
| throwawaysleep wrote:
| Run of the mill devs are not decision makers for the most
| part on any of this.
| scottLobster wrote:
| Uh, yeah they are. If I start a 60 hour (estimated) story
| the week before the sprint ends, I'm going to have some
| very pointed questions how thoroughly we want to test
| this new feature, and perhaps argue that it should be
| pushed into next sprint. Knowing how to formulate such
| issues to my management is a very typical part of my job.
|
| If you work at a place where devs are just code monkeys
| who implement the orders from on high with no feedback
| whatsoever or avenue for pushback... get a better job.
| jstx1 wrote:
| This seems like a flawed premise. It's an interview, there's
| nothing to be automated. It's a test, a task that the company
| wants you to do as a candidate, not a task that they need
| automated. They don't even need the task done, they aren't
| using the code you write to solve real world problems,
| they're using it to assess your suitability for the job.
|
| To put it another way - they can already automate solving
| their leetcode problems by looking up the solution in their
| database, no need for AI. But that's not the point at all.
| autotune wrote:
| >They aren't using the code you write to solve real world
| problems, they're using it to assess your suitability for
| the job.
|
| If they aren't evaluating how the code you write fits into
| the context of a real world problem, how can they possibly
| use it to asses your suitability for the job? Using fake
| code problems to evaluate candidates is the flawed premise
| here.
| jstx1 wrote:
| You're changing the topic - whether data structures and
| algorithms problems are suitable for interviews is
| separate question. My claim is that if you're using those
| types of interview questions, an AI model being able to
| solve them well makes no difference to your interview
| process because the interview isn't something that you
| want to automate.
|
| Like I said, they were always able to automate solving
| those problems by doing a database lookup.
| autotune wrote:
| I am literally responding directly to what you wrote in
| the way that you wrote it. My counter claim to your
| initial point is that candidates will be able to use said
| AI in competitive programming interviews to fool the
| interviewer by the interviewee, thereby making
| competitive programming pointless as an interview tool.
| It automates it for the interviewee, not the interviewer.
| jstx1 wrote:
| I thought that you mean companies automating stuff. But
| it still doesn't make sense - interviewees have been able
| to cheat in online assessments forever. Does a better way
| to cheat in a small subset of all the leetcode interivews
| really make enough of a difference to make all of those
| interviews obsolete?
| autotune wrote:
| > It's a test, a task that the company wants you to do as
| a candidate, not a task that they need automated
|
| You are contradicting your earlier statements. But yes, a
| tool that can beat leetcode interviews successfully and
| reliably is a game changer for candidates who want to
| make it through these pointless LC interviews as a whole.
| jstx1 wrote:
| But that's cheating, and people can cheat without AI
| already.
| autotune wrote:
| AI makes it easier and more reliable as I have already
| stated. We are going around in circles here so I am going
| to stop responding.
| bcrosby95 wrote:
| You can do this, but you're optimizing for the wrong thing.
| And given enough time this will be seen as just as
| backwards as, today, asking someone to list the methods on
| the String class from memory.
|
| Because there was a time we did this, and with tools like
| Google Search it became dumb, but it took some dinosaurs a
| really long time to let go of their old ways. Hell, some of
| them are probably still around.
| sys32768 wrote:
| These AI advances are thrilling and a bit spooky.
|
| RPG games will include options for laws of karma or configuring
| an omniscient AI "god" who ingests all of your actions and
| develops consequences or new plots and dialogue. And of course
| the player can pray to the god for forgiveness, for blessings,
| etc.
| kulahan wrote:
| This is exactly what's been exciting to me about the future of
| games - AI will allow for a level of dynamism that simply was
| not possible before. It's seriously exciting stuff.
| sys32768 wrote:
| It doesn't take much for us to believe someone is human,
| mostly confirming expected behaviors, doing small talk, etc.
| But AI NPCs will do all that and seem to have will and even
| conscience.
|
| I feel dizzy trying to think through the social and emotional
| ramifications of becoming seriously attached to NPCs who seem
| real and can articulate their feelings and share experiences
| with us.
| badrabbit wrote:
| Is appsec dead/dying? Am I wasting time learning software
| exploitation? What's the point if everything soon will be rust
| and go written by ML?
| Dopameaner wrote:
| I spent extensive time on chatGPT. It does give an answer, but it
| isnt the most optimal, sometimes non-compiled version of them.
| Its suggestions havent reached the level of creativity. Example,
| if I give it a system design problem of a real-world problem. It
| suggests to use Apache Kafka, but what if Kafka isnt sufficient
| for whatever reason?
|
| E.g, Lai-Yang algorithm will give you no hits,
|
| Fascination observation, I asked for Project Euler #193 (a
| problem I solved using mobius function). chatGPT solved it using
| bruteforce, even after I asked for the most efficient way it
| could solve it with. It just used memorization, which wasnt
| enough to find an answer for that problem quick enough. I asked
| whether it could use the mobius function, it couldnt translate it
| to code, and I had to give it python code to make it work.
|
| - If you ask in-depth technical questions on how task queueing
| works within Elasticsearch, it wont be able to give you answer.
|
| - George Hotz in Lex friedmen interview mentions that GPT-3 has
| 100 most recent messages as a limit. He isnt convinced that is
| enough to completely build a complex tool like Solve FSD or
| implement me a kafka.
| wendyshu wrote:
| Is this a new version of AlphaCode or just a new evaluation of
| it?
| woopwoop wrote:
| Is it even a new evaluation? The article is really unclear.
| 100721 wrote:
| Looks like they trained it on an additional 13.5k competitive
| programming problems for this specific task.
| taneq wrote:
| Job's done, crew, time to go home. ;)
| andrekandre wrote:
| product owner here, now that your free i have some more ideas
| i'd like you to work on...
| paxys wrote:
| People who think that tools like these will end software
| engineering as a career are the same ones who buy into the "10x
| rockstar programmer" myth and think that the job description
| entails sitting in a room and writing code for 12 hours a day and
| nothing else.
|
| A software engineer's job will be the same as it always was - to
| translate unclear and always-changing requirements into something
| that works. Until ChatGPT, Deepmind or whoever else can learn to
| deal with my client or product manager, my job is safe.
|
| In fact AI making programmers more productive is a _good thing_.
| It is only going to increase the problem space and we 'll need
| yet more engineers to fill it, like all other productivity
| advancements that came before.
| throwaway4aday wrote:
| Hey, if it can negotiate a Verizon bill then it stands a good
| chance of negotiating a list of features. Maybe you could pair
| it with a stable diffusion design bot that shows a selection of
| imagined designs that are based on the client's description of
| what they want. The client picks an option and the chatbot
| negotiates the price and timeline for it.
| kristopolous wrote:
| it's about deprofessionalizing it so cheaper people can do a
| "good-enough" job that you can't charge high prices for your
| service.
|
| The loom and sewing machine didn't eliminate the seamstress, it
| just impoverished the profession.
|
| It's like driverless trucks; what you'll actually see is
| _mostly_ driverless outsourced and remotely monitored trucks
| where someone is making something like $1 /day monitoring 5
| trucks at once and switching it to remote control mode when
| needed.
|
| We've proactively organized our economy to produce these kinds
| of outcomes. It's not the technology that's the problem, it's
| the unquestioned assumptions of how we've collectively presumed
| it will be used.
| acuozzo wrote:
| > Until ChatGPT, Deepmind or whoever else can learn to deal
| with my client or product manager, my job is safe.
|
| Given the rise of "NoCode", perhaps clients & managers are more
| willing to meet in the middle than one would otherwise assume.
|
| Given how managers seem to enjoy meetings, I can easily imagine
| one sitting down with e.g. Dragon to speak with e.g. ChatGPT in
| order to clarify, disambiguate, and expand requirements list(s)
| before feeding them to e.g. Alpha Code.
| lb4r wrote:
| Given that the amount of work to be done remains the same, new
| tools that increase the efficiency of workers will reduce the
| numbers of workers needed. Thus, even without machine learning,
| new tools make your position worth less and less; if nine
| people can do tomorrow what ten people can today, that is
| effectively a threat to your job. What keeps your job safe is a
| seemingly ever-increasing amount of work to be done where old
| tools can't be applied for increased efficiency (which is why
| most software engineers will have to keep their skill set
| constantly up-to-date). For how long will this be the case,
| though?
| paxys wrote:
| Every advancement in software engineering productivity
| throughout history has done the exact opposite. It increases
| the overall problem space for the profession and you need
| more and more programmers to meet the demand.
| lb4r wrote:
| Yes, I 100% agree. I was sort of nitpicking about what kept
| your job safe since you seemed to imply that it was due to
| the shortcomings of Deepmind and ChatGPT. I argued that it
| is due to keeping your skill set updated, and, as you put
| it more elegantly than I did, the increased overall problem
| space.
|
| I do, however, think there will be an inflection point in
| the coming decades, where the tools become more generalized
| and better at dealing with new problems. I might also add
| that the reason for this belief is simply that a lot of
| work is being put into making these type of generalized
| tools; but unlike Kurzweil, I don't quite believe it will
| lead to the Singularity. :)
| gorjusborg wrote:
| We just need to replace the product manager and client with AI
| too ;)
| paxys wrote:
| Well we can replace everyone with AI and not even need to
| exist at all. Until that happens, we'll still need good old
| fashioned managers and software.
| spaceman_2020 wrote:
| Agreed, but the question is: how many other other programmers
| do you need on your team now? How much of their work will be
| taken over by our new AI magic boxes?
|
| You have to see this from the perspective of non-tech
| companies, who see tech as a cost-center, not a profit or
| innovation center.
|
| Does your regional grocery store that just needs some tools
| that can help it track inventory really care whether its code
| comes from a team of big brained humans or two programmers in a
| basement copy-pasting chatGPT answers?
| paxys wrote:
| You can say the same thing for any other advancement in
| software engineering. Did good IDE tooling reduce the number
| of programmers? Explosion in open source libraries? Cloud
| computing?
|
| A single programmer with a laptop can do in minutes today
| what it took entire companies of hundreds of professionals
| and a large amount of funding a few decades ago. Yet the size
| of the industry hasn't shrunk in the same period - quite the
| opposite in fact. There is more need and demand for
| programmers today than ever before.
| spaceman_2020 wrote:
| Agree with that as well - there's a good argument to be
| made that companies will find entirely new product lines
| and efficiencies once they free their developers from
| fixing low-level problems and bugs.
|
| It's going to be an exciting few years to say the least.
| greenthrow wrote:
| > the AI solved about 30 percent of the problems
|
| I think "conquers coding" may be overstating it.
| cratermoon wrote:
| "Any fool can write code that a computer can understand. Good
| programmers write code that humans can understand." - Martin
| Fowler
| keepquestioning wrote:
| Horrible news!
| foota wrote:
| Depending on the pool of course, but 50th percentile is pretty
| poor in a programming challenge.
| born-jre wrote:
| can it do something like ARC challenge [0]?
|
| [0]: https://www.kaggle.com/competitions/abstraction-and-
| reasonin...
| commandlinefan wrote:
| Headline: "DeepMind's AlphaCode Conquers Coding, Performing as
| Well as Humans"
|
| Actual text: "AI just trounced roughly 50 percent of human
| coders"
|
| I think we already all knew that about half of the coders out
| there weren't all that great.
| weatherlite wrote:
| It will keep getting better and better though. 5 years from now
| no reason it won't beat 90% of devs. And then eventually, like
| in chess, all of them.
|
| There is a big question though of how this translates to real
| world programming. Competitive programming and real world work
| are not the same thing.
| hoosieree wrote:
| Automation doesn't have to outperform the _best_ of us, it only
| has to outperform the _worst_ of us to have a serious impact.
| Outperforming a 50th percentile coder is a huge deal.
| altgeek wrote:
| That maps rather well to an old routine from George Carlin
|
| "Think of how stupid the average person is, and realize half of
| them are stupider than that."
|
| https://youtu.be/AKN1Q5SjbeI?t=19
| manx wrote:
| That's actually the definition of the median.
| EGreg wrote:
| That's what we said when AI beat chessplayers ranked below
| 1450.
|
| "Oh big deal"
|
| Now they never lose to any human in any chess game ever.
| bradleykingz wrote:
| And yet humans still play chess
| Filligree wrote:
| But very few do so for money.
| dangond wrote:
| I love how the sheer simplicity of this comment manages to
| convey so much about humanity and our drive to do things we
| love. I feel a bit silly reacting like this, but it's truly
| beautiful.
| patrec wrote:
| But plugged, at times.
| hgsgm wrote:
| nwoli wrote:
| Algorithm code challenges is kind of a limited domain though
| (with solutions often really easy to beat average humans if you
| were allowed a lookup table of existing solutions). I wonder how
| well it does on open ended everyday challenges
| kevmo314 wrote:
| Peter Norvig has written a good analysis with some interesting
| takeaways:
| https://github.com/norvig/pytudes/blob/main/ipynb/AlphaCode....
| emptybits wrote:
| Kevin Wang, competitive coder: "I also try to minimize the
| amount of code I write: each line of code is just another
| chance for a typo."
|
| Peter Norvig: This is why AlphaCode learned to write code with
| one-letter variable names, and with no comments or docstrings.
|
| !!
| pfdietz wrote:
| This is why neural networks need to have pain buttons that
| humans can repeatedly press. Of course that's how we get
| killbots.
| strofcon wrote:
| Heh definitely an eyebrow-raising comment! On a second read
| of it though, I took it more as "because minimizing the code
| written is a pattern observed in the wild, AlphaCode mimics
| it as that's what it learned from."
|
| I might be terribly wrong though. :-D
| Y_Y wrote:
| To be fair, unless you're doing something absurd the length
| of variable names shouldn't significantly affect the number
| of lines. Also comments with typos aren't really a problem. I
| don't like hard-to-read code either, but the more years I
| work the more I think that the shortest (within reason) code
| is the best code.
|
| That is to say, golfing is bad unless for fun, shorter code
| is usually better, the shitty hard to read code is probably a
| reflection of human laziness rather than misguided tersity.
| psychomugs wrote:
| "If I had more time, I would have written shorter code"
| ipnon wrote:
| There's something to be said about publishing blog posts as
| Jupyter notebooks straight to your GitHub. A strong argument
| won't need decoration.
| dclusin wrote:
| Almost half the screen real estate on my iPhone SE was white
| space. I had zero interest in reading something, especially
| something that long and in depth on a page that wastes so
| much space for literally nothing.
|
| Typography, layout, and presentation counts for more than
| coders such as myself will ever admit publicly :)
| throw10920 wrote:
| This is incidental. The current Jupyter CSS needs to be
| improved - so what? That's not a bug unique to Jupyter,
| I've seen dozens of other websites with similar problems
| (some with extremely bare HTML and CSS that was minimal but
| bad - so the problem is unrelated to complexity, too).
|
| This has nothing to do with the parent comment's topic,
| which was about using computational notebooks as blog
| posts, and is in effect "complaining about tangential
| annoyances--e.g. article or website formats, name
| collisions, or back-button breakage" - which the guidelines
| explicitly forbid.
| bjornsing wrote:
| Doesn't display correctly on my iPhone though. :/
| Closi wrote:
| I've been following the progress of AI code generation and I
| have to say, the results are impressive. There are still some
| limitations to these models, such as reproducing poor quality
| training data and difficulty maintaining focus throughout a
| problem.
|
| I think using more diverse and high-quality training data could
| help address these issues, and incorporating additional
| constraints and regularization techniques into the model's
| training could prevent hallucinations and improve its overall
| reasoning abilities.
|
| While there is room for improvement, the progress in this field
| is exciting and I can't wait to see where it will lead.
|
| NB: This comment was written by GPT-3 after reading the
| article. The last few months of AI have been frankly mind-
| boggling.
| quaintdev wrote:
| I tried throwing below programming problems at ChatGPT
|
| 1. Write a static file server in Go
|
| 2. Write Go code to convert Color image to B/W
|
| For both I got results. I know both are simple but still it's
| fascinating that AIs can write code. I have written more about
| it here https://rohanrd.xyz/posts/surprising-capability-of-ai-
| code-g...
| i_hate_pigeons wrote:
| I have tried given a simple test suite and asking for an
| implementation that fulfills it, then asking to modify its
| impl and the test suite with a new requirement and it did it
| bradjohnson wrote:
| I threw a leetcode easy description into ChatGPT and
| submitted the solution without any editing. It passed all the
| test cases and got in the 95th percentile for efficiency of
| C++ submissions.
|
| I haven't tried anything more advanced, but to go from simple
| requirements to solution without any clarification or even
| method signatures (It guessed the correct method signature
| down to the name and input params) was pretty dang impressive
| to me.
| AnonymousPlanet wrote:
| Chances are, you gave it a problem that was actually in its
| training set.
| bradjohnson wrote:
| It's still impressive to me even if it was in its
| training set. The data used for training is not stored
| verbatim in the OpenAI model. It would still need to
| parse the context anew and solve the problem given its
| understanding of the boundaries of what I've written.
| Dopameaner wrote:
| On contrast, I did it for projecteuler and told it needs to
| be extremely efficient or math. Didnt give me an
| appropriate result.
| twalla wrote:
| I wrote a similar prompt but for a discord bot that would
| take a youtube url and some timestamps and return a gif.
|
| Afterwards I asked it to add an option to "deep fry" the gif.
| Not only did it produce the correct code it also understood
| what I meant when referring to deep fried gifs. I was
| definitely impressed.
| [deleted]
| erikpukinskis wrote:
| The most important thing I need from a programmer I work with is
| their ability to have a fruitful conversation with me about why
| the code might be written one way or another, and for them to
| iterate on the code in response to those conversations.
|
| If a coder can write code, but they can't do that, they're
| useless to the org. It will be faster for me to write their code
| myself than to maintain what they've done.
|
| So really that's what I'd need from an AI coder. Writing the code
| is good, but can we talk about it and can you learn the specific
| architecture principles we have applie in this specific codebase.
| roflyear wrote:
| 99% of companies don't care about that. I would say 90% of
| companies you'll create friction trying to have those
| conversations "just get it done!"
| jchanimal wrote:
| It's true. For almost all of my career I've worked in the
| software equivalent of a lab building a diamond making robot.
| I did a stint at a consulting firm -- that was more like coal
| mining than manufacturing gemstones. By this I mean never
| ending surface area, and very little incentive to ever go
| back and refactor anything.
| londons_explore wrote:
| For most orgs, code isn't the end product. Code is the way to
| build the product to sell to the user.
|
| Therefore, in most orgs, code architecture/style is a distant
| secondary to 'does it achieve what the user will pay money
| for' and 'why isn't it finished yesterday?'.
| belter wrote:
| "I'm sorry Dave...I can't do that"
| lowbloodsugar wrote:
| Pretty sure there are managers who don't know WTF you talking
| about and will be happy to address your inability to work with
| the new "hire" by letting you go.
|
| This wont fly at organizations that need the code to work,
| every time, but think about the explosion of non-programmers
| who can now make systems that "basically work". If you don't
| think "basically works" is a high enough bar to succeed in
| e-commerce, let me show you my recent support email threads
| with companies from whom I've been trying to purchase Xmas
| gifts for my wife.
| nashashmi wrote:
| Can't say anything more after what coldtea wrote!
|
| But this thing is not a sub for coders. It is an assistant to
| coders. And yes, being able to explain the code is incredibly
| important. Just as it is to verify the code.
|
| To me, this is just another exercise where coder becomes
| manager of his very own coder. And has to check the code his
| coder produced.
| [deleted]
| jimbob45 wrote:
| I couldn't agree more. Most of my job consists of gathering
| requirements and brokering deals between affected departments.
| I think most college students dream of the type of job that
| could feasibly be replaced by AI. In reality, the coding is the
| easiest part of the job and takes the least amount of time.
| welshwelsh wrote:
| ChatGPT can already do that. You can ask questions about the
| code, make suggestions and it will take this into account and
| write improved code. You can tell it the code it wrote produces
| an error, and it will then find the error, explain what it did
| wrong and fix it.
| bcrosby95 wrote:
| You can also tell it the code it wrote produces an error-
| that-isn't-an-error, it will find that "error", explain what
| it did "wrong" and "fix" it.
|
| If you don't understand the program ChatGPT wrote, it will
| happily butcher it for you, because it doesn't really
| understand it either.
| thedorkknight wrote:
| I did this and it is pretty inconsistent. It kept telling me
| I was using the wrong version of a library (which from what I
| could tell by looking at documentation was just not correct,
| but I didn't look into it too long), and at one point it just
| kept insisting that it's code solution was correct when in
| reality the error was due to it importing two different
| libraries that use the same namespace, which led to an error
| saying it couldn't find the function being called.
| SketchySeaBeast wrote:
| Personally, if I can track the problem down to a specific
| chunk of code and know the input and expected outputs, I'm
| 99% of the way to the solution.
| furyofantares wrote:
| What I've found is each iteration with ChatGPT is alright at
| fixing or adding to the existing code per my instruction but
| every iteration includes a significant chance of introducing
| a new bug or simply forgetting some piece of the existing
| code in the next iteration.
| cdelsolar wrote:
| ChatGPT can actually do that sometimes. I've been using it as a
| rubber duck to help me debug code and it tells me what is wrong
| with what I wrote and how to fix it.
| soco wrote:
| But does it give real and useful answers or is making
| pleasant imaginary arguments?
| john-radio wrote:
| I can't speak for ChatGPT but the "AI" that I use for this
| sort of development, the `M-x doctor` feature of emacs,
| only makes pleasant imaginary arguments, mainly because it
| was written in 1966, but it is still useful sometimes and
| at least it doesn't make any incorrect assertions about
| code: https://en.wikipedia.org/wiki/ELIZA
| hgsgm wrote:
| A rubber duck doesn't even give imaginary arguments.
|
| The point is to stimulate brainstorming, not get answers.
| soco wrote:
| Yes, rubber duck debugging is self-talk so the task still
| sits with the programmer. However with the AI one get
| external answers which may or may be not correct, and
| could or could not influence one's decisions.
| drakenot wrote:
| I don't know if this is a philosophical question or not,
| but the answer for me is: most of the time it is useful
| answers.
| borbulon wrote:
| You're making it seem like AI will replace coders. Do you think
| Dall-E will replace artists, or just make an artist's job
| different?
|
| IMO it's more like a coder using AI to make their job easier.
| It's still up to a human to come up with the individual problem
| the function solves, architect a solution from multiple
| functions/objects/etc, come up with a data model, and so on and
| so on. The AI just generates the code itself. And at least as
| of now, the code needs to be double-checked.
| bufferoverflow wrote:
| DallE, Midjourney, and StableDiffusion are already taking
| jobs. Illustrations, album covers, blog post images. People
| are making beautiful books and playing cards.
|
| Midjourney V4 is amazing. It spits out absolutely beautiful
| images.
| derefr wrote:
| To be precise, AI is "taking jobs" that could have already
| been commoditized long ago if people in developing
| countries understood how Fiverr worked and had set up "art
| sweatshops" to serve demand.
|
| There's enough art talent already around in the world to
| entirely commoditize the supply of it for the little one-
| off no-style-guide-to-follow commissioned works you're
| talking about. It's just not currently a _liquid_ market --
| supply and demand find it hard to discover one-another --
| and so a true market-clearing price can 't be set.
|
| Meanwhile, AI is not currently taking anyone's advertising-
| campaign graphic design job, or anything else where the
| "efficient-market price" (in a world where human "art
| sweatshops" existed) would be more than $5.
| thedorkknight wrote:
| How many of those people would have actually paid for an
| artist otherwise though? I myself am thinking of playing
| around with game dev for fun with the thought of using
| image AI to generate the art. Were it not for that, I'd
| just use free textures, or more likely, just spend my time
| doing something else entirely
| weatherlite wrote:
| > You're making it seem like AI will replace coders
|
| I find it best to accept it will most likely replace all of
| us, from doctors to coders to even psychotherapists. Won't
| happen next year but 10-20 years is a very long time this
| thing keeps getting better. Eventually we won't be able to
| tell if its a machine or a brilliant superhuman. The bummer
| in all of this, in my view, isn't the loss of jobs; we'll
| find what to do. It's the transition period - the accounting
| wizard or the brilliant doctor losing their jobs and status
| and becoming kindergarten teachers or care takers or
| unemployed. Nothing in their upbringing or life experience
| prepared them for such a thing ... so that's probably gonna
| be rough for many people. But once most people went through
| the transition it won't be bad. Society I believe will be
| better off. We will stop being obsessed with money and status
| and spend much more time with family and friends.
| Entertainment will be insanely good and so will healthcare.
| Possibly medicine to make our moods better. It could be
| utopia.
| p0pcult wrote:
| Currently, the economics don't stack up. How does society
| handle it when the best jobs are automated? When all jobs
| are automated?
|
| The Star Trek post-scarcity utopia scenario feels very
| unlikely; Mad Max-style scrapping for leftovers while Musk,
| Bezos, et. al. live behind walls feels infinitely more
| probable.
|
| How do you implement UBI when a huge proportion of the
| political class is vehemently against it. Maybe we need to
| AI politicians, so they start to figure it out?
| weatherlite wrote:
| I don't have anything figured out but I think we can do
| it as a society. I do think we have a very negative
| biased view of the ultra rich. Zuckerberg, Gates and
| Buffett all give or will give all of their wealth to
| society. So not all of them are the same. I think Musk
| will probably also give most back eventually though I'm
| not sure he has said anything yet.
|
| And remember, we are still a democracy. We get to vote.
| We control the army and the police and all institutions.
| If we decide that this capitalism isn't hot sh* anymore
| we can change it. What will the evil billionaires do?
| (this sounds like a good straight to DVD movie
| actually...hey GPT write me a script about this)
| selimthegrim wrote:
| As a resident of New Orleans I assure you the healthcare is
| not insanely good
| xpe wrote:
| > But once most people went through the transition it won't
| be bad. Society I believe will be better off. We will stop
| being obsessed with money and status and spend much more
| time with family and friends. Entertainment will be
| insanely good and so will healthcare. Possibly medicine to
| make our moods better. It could be utopia.
|
| First, I don't see "utopia" as likely; furthermore, I have
| a suspicion it may be impossible, given human nature.
|
| Second, even the argument that society will be "better"
| demands much more reflection. The implied argument above is
| only a sketch. I don't find it convincing much less
| plausible. I'll call attention to four points (implied from
| above):
|
| 1. AI will replace humans in most or all professions
|
| 2. AI quality will be much higher than the previous human
| levels
|
| 3. A broad swath of people (using some notion of equity and
| fairness) will have enough money to live happily
|
| 4. "We'll spend much more time with family and friends"
|
| Each of the four points are quite uncertain. Furthermore,
| even if `k` is true, `k+1` does not follow.
|
| Who would like to flesh out some ways the sequence (1, 2,
| 3, 4) might happen?
| weatherlite wrote:
| Yes of course you are right, we don't know anything yet I
| agree. I am speculating a lot here. But I'm a believer in
| "intelligence as a commodity" as Sam Altman put it after
| seeing GPT3.5 so I think points 1 and 2 will be reality.
| 3 follows quite naturally to me but only in Western
| societies... Putin will have different ideas.
|
| Anyway speculating is fun but you're right its just
| speculating. My main point is we should always keep in
| mind this could turn out to be great .
| monocasa wrote:
| I'm not sure 3 follows. A walk downtown of any major city
| in America shows what society does for anyone who's work
| can't be commoditized or aren't sitting on an existing
| pile of wealth such that they don't need to work.
|
| They get nothing but tents and shame.
| weatherlite wrote:
| That's because people like you and me (I'm assuming
| you're somewhere in the comfortable middle class) vote in
| for this system. Because so far we've enjoyed a
| reasonable quality of life. If that's no longer the case,
| we can vote for other leaders and other systems.
| monocasa wrote:
| Not really unfortunately, the US hasn't been a democracy
| reflecting the collective will of it's voters for some
| time now.
|
| https://www.bbc.com/news/blogs-echochambers-27074746
| xpe wrote:
| Fair enough :)
|
| _And_ if since we care about our AI-interdependent
| future, more of us (as in the people here on HN) need to
| wade into the gory details, including ethics and the
| current power structures. The "technology" (as in
| algorithms, data structures, hardware, etc) is arguably
| the "easy" part. There are plenty of existing incentives
| and structures to keep those _moving_. But moving in what
| direction? Even the notion of an "ethical compass" seems
| antiquated in light of current technology. We may have to
| reframe everything. This is a big challenge.
| xpe wrote:
| > It's the transition period - the accounting wizard or the
| brilliant doctor losing their jobs and status and becoming
| kindergarten teachers or care takers or unemployed.
|
| Yes, this is a big problem.
|
| However... a lot of people would enjoy being teachers,
| albeit with significant improvements to the educational
| systems.
| weatherlite wrote:
| We can enjoy it if there's more of us. Each class can
| have 5 teachers instead of one teacher on 30 kids.
| Government can create those jobs, take the hundred of
| trillions created and redistribute it. Marx was right I
| think capitalism eventually kills itself ...we won't need
| it anymore. Arguably we already don't need the aggressive
| version we have now but soon enough it will be clear we
| don't need any version of it. The means of production
| (AI, robots and land) will be transferred to the people
| who will all receive basic income, free services and (if
| they want) jobs created by the government for the greater
| good. I don't think its a dystopia.
| xpe wrote:
| > Marx was right I think capitalism eventually kills
| itself ...
|
| I doubt very much that this is a testable theory. I think
| it is primarily a normative one.
| xpe wrote:
| > The means of production (AI, robots and land) will be
| transferred to the people who will all receive basic
| income, free services and (if they want) jobs created by
| the government for the greater good.
|
| This is a prediction?
|
| Given human nature and the diversity of people (w.r.t.
| rationality, religiosity, morality, capability, and so
| on), it is very much an open question about (a) how AI
| capabilities will develop; (b) how they will be paid
| for... (c) and by whom; (d) to whom will benefits accrue;
| (e) how will society change.
|
| These are broad, sweeping questions. Plenty of fodder for
| imagination, hope, transformation, cynicism, backsliding,
| or even despair.
|
| If I were to make a bet, on our current trajectory, I see
| some key factors in tension:
|
| 1. educational quality, in absolute terms, increasing
| _and_ being more equitable
|
| 2. educational quality, in relative terms, continuing to
| be very unequal and probably getting more so. As one
| example, who has the resources to direct computationally
| intensive AI experiments? There are (and probably will be
| for a long time) gatekeepers for these resources. People
| that mix in this circles have a huge advantage. This
| makes me wonder if "exclusivity leads to inequality" is a
| saying from some philosopher.
| rurp wrote:
| > We will stop being obsessed with money and status and
| spend much more time with family and friends. Entertainment
| will be insanely good and so will healthcare.
|
| As nice as this would be, I think there's roughly 0% chance
| of it happening. Over the past few millenia humans have
| doubled productivity per capita a ridiculous number of
| times. None of those leaps led to an end of status seeking
| or a transition towards mostly leasure time for the masses.
|
| Instead I expect more of the opposite from these
| developments. Power will get increasingly concentrated with
| people who have very little interest in the needs and wants
| of the plebs.
| weatherlite wrote:
| > Power will get increasingly concentrated with people
| who have very little interest in the needs and wants of
| the plebs.
|
| At least in OpenAI's case I think they take this thing
| very seriously (Sam Altman doesn't strike me as evil one
| bit, quite the opposite in fact
| https://www.youtube.com/watch?v=DEvbDq6BOVM). In fact
| most of the tech elites don't seem evil to me, if they
| only cared about money Zuckerberg and Gates wouldn't have
| pledged away all their wealth. Some of them are as you
| describe but I think most of them are actually somewhere
| in the progressive axis.
| rurp wrote:
| We have pretty different views on tech elites. I don't
| think they are evil per se, but most are greedy and
| generally lack empathy. It's almost impossible to go from
| mildly wealthy to multi-billionaire status without some
| aggressive wealth accumulation.
|
| For the philanthropy, I'll just note that these pledges
| don't involve literally transferring 99% of wealth out of
| their control. The vast majority of the pledged money
| goes to a trust the person controls, organizations they
| have some relation to, or just stay completely in their
| control for years with only vague non-binding commitments
| to eventually donate it. In return for this largess they
| get significant reputational and tax compensation.
| eulers_secret wrote:
| Your argument is that the philanthropy of the ruling
| class will fulfill the required redistribution of wealth?
|
| So far it hasn't worked out too well...
| weatherlite wrote:
| Not quite. The argument is that our tech elites aren't so
| evil to try to violently take over and that us masses can
| simply vote a more (much more) socialist system.
|
| It's up to us. Indeed easier said than done in the
| current dysfunctional and polarized politics of ours but
| we can still do it.
| spaceman_2020 wrote:
| No, but it is very likely that it reduces to demand for
| coders and artists.
|
| Sure, there will always be demand for a Linus Torvalds or a
| Damien Hirst. But will there be demand for Coder #365968 at
| Infosys or a graphic designer pumping out $50 ad banners?
|
| We're looking at the possibility of some white collar jobs
| having the same income disparity as creative jobs. Just as
| there are some musicians who make hundreds of millions while
| the vast majority barely make ends meet, we may have a future
| where the star programmers make millions while the average
| players are automated out of the competition.
| weatherlite wrote:
| > No, but it is very likely that it reduces to demand for
| coders and artists
|
| Eventually, I agree yes. But there could be a boom of huge
| new investments into A.I products, more devs needed and in
| fact teams getting way more requirements since they are
| more productive. Imagine the stuff we will be able to build
| in things like search, personal assistants, biomed, in fact
| what industry won't this affect? Its unbelievable to me
| that people are now saying Google search might become
| obsolete, that's absolutely crazy. Not many people saw that
| one coming. But at least initially I don't think GPT models
| will be able to do everything themselves. So its very hard
| to determine that say in the coming 5 years devs will find
| it more difficult to get a job. 10-20 years from now sure,
| I don't see how anyone gets a cognitive job anymore let
| alone devs. In fact our entire school/university system is
| probably obsolete, kids are probably learning skills they
| won't be able to apply in any job market. We need to start
| think about stuff like teaching kids emotional
| intelligence, spirituality and meditation...not cramming
| for a math test.
| c3534l wrote:
| Eventually. 20 years, 50, maybe 100. But eventually.
| jve wrote:
| ... people still freaking out machines will replace
| someone, since what, industrialization 200+ years ago?
| People still work, just maybe different jobs or more
| intellectual jobs.
| phito wrote:
| Except this time it's different
| https://youtu.be/WSKi8HfcxEk
| coldtea wrote:
| > _Do you think Dall-E will replace artists, or just make an
| artist 's job different?_
|
| If an ad agency or a magazine publisher can get a custom
| illustration that works for my purposes from Dall-E, then
| they ain't paying no artist. Not theoritical, many already do
| use those generated images.
|
| That's not just "making the artist's job easier". It's taking
| jobs from artists (well, illustrators and graphic designers
| at least), especially in the cheaper end of the business
| (e.g. not Nike, but your local Pet Store chain, restaurant,
| or news outlet, sure).
| darkerside wrote:
| People will tire of DALLE eventually. We are great pattern
| matches, and we'll start to see the patterns that don't
| measure up. Then it'll be all about the next AI engine, and
| it'll need its own corpus of work. Who's going to create
| it?
|
| So yes, not OP, but I think artists will still have jobs,
| although less of them, and the job will be different.
| YetAnotherNick wrote:
| There is no indication that we don't have enough images
| already available and training time is the main
| bottleneck. Also openai clear showed that DALL-E could
| generate avacado chair even though there is nothing like
| that in the dataset.
| mitchdoogle wrote:
| There's still a matter of taste that Dall-E can't provide.
| It will only spit out what you tell it to, and if you lack
| good taste, then you'll still produce something inferior
| synu wrote:
| It must be relatively easier though to be a discerning
| consumer than having the ability to produce something
| yourself (without using AI?)
| xbmcuser wrote:
| It will replace coders with thinkers. The number of people
| that have ideas good or bad is large compared to the people
| that can implement those ideas as these bots/ai get better
| they will produce a lot of code. At the start it will
| probably increase the amount of code produced and the number
| of coders but with time as the ai gets better need for coders
| will decrease. We will need people that can think or imagine
| ideas rather than coders now these people might still be
| considered software developers but they will not be coders in
| strictest sense of the word.
| coldtea wrote:
| > _The most important thing I need from a programmer I work
| with is their ability to have a fruitful conversation with me
| about why the code might be written one way or another, and for
| them to iterate on the code in response to those conversations.
| If a coder can write code, but they can't do that, they're
| useless to the org._
|
| Don't worry, you wont be there at the org to make these
| "fruitful conversations" either. The AI will take your job too
| WrtCdEvrydy wrote:
| > Don't worry, you wont be there at the org to make these
| "fruitful conversations" either. The AI will take your job
| too
|
| I've been waiting for something to take my job for 15 years.
|
| I started as a small developer testing radio firmware, moved
| on to test web firmware, now I instruct terraform how to
| build infrastructure and now I instruct developers on how to
| do things to build proper infrastructure.
|
| I'm ready to retire but apparently I'm incapable of having an
| AI that can actually simulate Super Power ADHD at work, so
| we'll have to wait a bit.
| npigrounet wrote:
| soperj wrote:
| If you're ready to retire, retire. If they can't replace
| you, they should be paying you way more.
| Supermancho wrote:
| I have observed that companies will choose not to invest
| further in irreplaceable labor, under any circumstance.
| Every single company with, that I have been part of over
| the decades (30ish?), exhibits the same behavior when it
| becomes common knowledge that someone is a linchpin. Put
| the company in a game of chicken for a raise (which
| includes an implicit, or-I-quit) and they let the
| "irreplaceable" employee go. Every single time. It's not
| hard to see why. No company wants to have a disgruntled
| linchpin nor do they want to be beholden to some lower
| level character.
| microtherion wrote:
| Much as I'd like to side with labor here, the companies
| are probably right. Even if the linchpin is high level
| and happy, building around a single, irreplaceable
| employee is not a sound long term strategy.
|
| For all the important contributions Steve Jobs made to
| Apple when he returned, maybe the least heralded and
| hardest to implement is that he managed to NOT make
| himself irreplaceable. To many people's surprise, Apple
| did not collapse after his retirement and death. So maybe
| his most genius contribution was not to make Apple
| dependent on his ongoing genius contributions.
| andrekandre wrote:
| > So maybe his most genius contribution was not to make
| Apple dependent on his ongoing genius contributions.
|
| i wonder if we all acted in this way, would things be
| better than they are? do we as individuals put our
| need/want to be depended upon above what is (for lack of
| a better term) the long-term good?
| soperj wrote:
| Don't know why you'd stay on then?
| blacksmith_tb wrote:
| My experience is similar, though I would personally
| attribute it at least partly to stupidity - on the one
| hand, in most cases when I have worked with someone
| crucial, they've been with the company longer than the
| people managing them, so the person or people making the
| firing decision don't really understand what's at stake.
| Or to put it another way, their cost-benefit is flawed,
| they know what it'd cost to keep the key employee, but
| they fail to imagine what it will cost to let them go.
| Supermancho wrote:
| > My experience is similar, though I would personally
| attribute it at least partly to stupidity
|
| Every company has suffered from the decision, in my
| experience. Never has it destroyed the employer, but I
| have heard stories about such eventualities.
| swissdevgirl wrote:
| It seems preposterous, but DeepMind's new coding AI just trounced
| roughly 50 percent of human coders in a highly competitive
| programming competition.
| agloeregrets wrote:
| So you're telling me it had requirements and expected results?
|
| My job is safe I see.
| JacobThreeThree wrote:
| Wait until the AI starts writing requirements and
| specifications docs.
| SketchySeaBeast wrote:
| It'll still be gibberish, but at least it's more likely
| it'll be grammatically correct.
| belter wrote:
| Created by ChatGPT just now...
|
| Title: Online Website for Chocolate Donuts
|
| Objective:
|
| The objective of this website is to provide a platform for
| customers to order and purchase chocolate donuts online.
| The website will offer a variety of chocolate donut flavors
| and allow customers to customize their orders by choosing
| toppings and packaging options.
|
| Functional Requirements:
|
| Customers should be able to create an account and log in to
| the website. Customers should be able to browse and view
| the available chocolate donut flavors, toppings, and
| packaging options. Customers should be able to add
| chocolate donuts and toppings to their cart and specify the
| quantity of each item. Customers should be able to view
| their cart and checkout to place their order. The website
| should process payment and provide confirmation of the
| order. Customers should be able to track the status of
| their order and receive updates on the expected delivery
| time.
|
| Non-Functional Requirements:
|
| The website should be secure and protect customer
| information. The website should be easy to use and
| navigate. The website should be accessible on a range of
| devices, including desktop computers, laptops, and mobile
| devices. The website should load quickly and be responsive
| to user actions. The website should have a clean and
| professional design.
|
| User Roles and Permissions:
|
| Customers: Customers will be able to create an account, log
| in, browse and order chocolate donuts, track their order
| status, and access their account information. Admin: The
| admin will be responsible for managing the website,
| including adding and removing products, processing orders,
| and managing customer information.
|
| User Interface Design:
|
| The website will have a clean and professional design, with
| a navigation menu at the top of the page and a search bar
| for customers to easily find specific products. The home
| page will feature a carousel of images showcasing the
| available chocolate donut flavors and toppings. The product
| pages will display images and descriptions of the chocolate
| donuts, along with options for toppings and packaging. The
| cart page will display the items in the customer's cart,
| along with the total price and a checkout button. The
| checkout page will allow customers to enter their shipping
| and payment information, review their order, and place
| their order. The order confirmation page will display the
| details of the customer's order and provide an estimated
| delivery time. The account page will allow customers to
| view and update their account information, including their
| shipping and payment details.
|
| Database Design:
|
| The website will have a database to store customer
| information, including their name, email address, shipping
| and payment details, and order history. The database will
| also store information on the available chocolate donut
| flavors, toppings, and packaging options. The database will
| keep track of the current orders and their status,
| including the expected delivery time.
|
| Technical Requirements:
|
| The website will be built using HTML, CSS, and JavaScript
| for the front-end, and a server-side language such as PHP
| or Ruby for the back-end. The website will use a database
| management system such as MySQL to store and manage the
| data. The website will be hosted on a web server and
| accessed through a domain name. The website will use a
| secure socket layer (SSL) to encrypt data and protect
| customer information.
|
| Testing:
|
| The website will be tested to ensure that it meets the
| functional and non-functional requirements. User acceptance
| testing will be performed to ensure that the website is
| easy to use and navigate. The website will be tested for
| compatibility on different devices
| yunohn wrote:
| That is beyond high level. Best case, that's a highly
| summarized generic version of some actual requirements.
| hoot wrote:
| I guarantee the OP could have asked GPT to expand on each
| of those and it would have done so flawlessly.
| jgalt212 wrote:
| So you're saying imposter syndrome is a real thing?
| krzat wrote:
| I hope we will get some AI for static analysis. It would be nice
| to hover on a symbol and get high level description of what it
| does, what pattern it follows, etc. AI can access not only
| current state of the code, but it's entire history, which no
| human would have time for.
| rcpt wrote:
| Alright everyone it's time to unionize
| zh3 wrote:
| I'm old enough to remmber the furore when calculators came on the
| scene ("Calculators Conquer Arithmetic, Performing better than
| humans"). Having played with various AIs, it feels similar -
| impressive when there's a known solution to look up
| (probabilistically guessed by the AI in this case) but flummoxed
| as soon as novel analysis is neccessary.
| riku_iki wrote:
| But how many humans lost jobs to calculators?
| _dain_ wrote:
| Lots, actually. The word "computer" used to refer to a job
| that humans did, full time.
| samvher wrote:
| I'm confused - how much has changed since February? The DeepMind
| blog post [0] seems to suggest not much. When I saw this headline
| on the front page I thought there were some major advances,
| again, but it seems not?
|
| [0] https://www.deepmind.com/blog/competitive-programming-
| with-a...
| espadrine wrote:
| Nothing has changed. The publication process to the Science
| journal took time and was finally published on 8 Dec.[0]
|
| [0]: https://www.deepmind.com/blog/competitive-programming-
| with-a...
| q-base wrote:
| I have made most of my money as a freelancer making weird
| integrations between legacy systems, migrating data that no one
| knew anything about, building or designing systems people weren't
| even able to formulate in any coherent way to me as a human and
| without having to change all the things they were certain they
| needed, when they found out that they really did not.
|
| Building the thing right vs. building the right thing.
|
| I am not saying that AI will not be able to revolutionize a lot
| of areas, but solving well known coding competitions are far
| removed from where a large portion of coders and technical people
| make their money.
| Havoc wrote:
| >> It doesn't have any built-in knowledge about computer code
| syntax or structure.
|
| It's pretty impressive that this naive approach works on
| something as complicated as coding. In hindsight it seems
| "obvious" that it would work in game-like contexts like chess/go.
| Programming feels like its a significantly higher level challenge
| in a way (not that go is trivial)
| borbulon wrote:
| Even if it were bulletproof I don't think it's necessarily a bad
| thing, it would probably make the average engineer more
| productive if most of their job was architecture, and the writing
| of individual bits of code was something automated.
|
| It's a lot like the conversation not too long ago that AI was
| going to replace managers. Sure, some aspect of their work might
| be gone, but it didn't obviate the need for the human.
| raydiatian wrote:
| One thing software engineers can do now is push legislation that
| will protect their jobs from being replace by AI. It won't be
| _that_ long until there is a 10x improvement in AI-based software
| development over human-based, easily on the order of legislation
| lifecycle. When that happens, we need to be legally at the table,
| or we will straight up be cut out of the profits by billionaires
| and hedgefunds, because fuck us and anybody except for them (it
| is their world financially).
| addedlovely wrote:
| What happens in a future where AI, that is trained on human
| generated datasets, becomes so prevalent that the humans that
| created the training data no longer have the original skillset
| required to create said dataset.
|
| Are we going to end up with AI training on AI generated datasets.
___________________________________________________________________
(page generated 2022-12-15 23:00 UTC)