[HN Gopher] What codegen is good for
       ___________________________________________________________________
        
       What codegen is good for
        
       Author : tomduncalf
       Score  : 99 points
       Date   : 2023-09-28 16:01 UTC (7 hours ago)
        
 (HTM) web link (www.figma.com)
 (TXT) w3m dump (www.figma.com)
        
       | mgaunard wrote:
       | Nonsense article. A compiler generates code. It's essential to
       | any software development.
        
         | [deleted]
        
         | noelwelsh wrote:
         | Agreed. A lot of words and design effort to not say very much.
        
       | thomashop wrote:
       | _Instead of thinking about codegen as a replacement for a
       | developer, what if we thought about it as an extension?_
       | 
       | In my filter bubble most people would already agree with the
       | statement that generative code models are an extension rather
       | than a replacement. It's not that revelatory of a statement.
       | 
       | Saying they are a replacement without any evidence to suggest
       | this is already starting to happen seems like it's taking a
       | larger leap of faith.
        
         | lifeisstillgood wrote:
         | The "Story of Pixar" documentary gets about 30 minutes in and
         | apparently a Disney exec kicks out the nascent computer
         | animation department with the comment "if computers can't make
         | movies cheaper than animators we don't want them"
         | 
          | plus ça change
        
         | diogenes4 wrote:
          | This is also a theme with virtually every technology ever
          | made -- technology works as leverage for human intention but
          | rarely replaces humans wholesale.
        
           | toddmorey wrote:
           | I think in most cases, yes. But sometimes it can be very
           | disruptive to the workforce, too. I think of what AI is going
           | to do to the commercial driver, factory / warehouse, and food
           | service industries, for example. That's a huge labor pool to
           | reallocate.
           | 
            | Not to mention ChatGPT already has most of the skills
            | needed to be a reasonable replacement for most of my
            | customer service interactions. Not all, but somewhere
            | around 70% I'd say.
           | 
           | Not trying to be alarmist, but I feel we're going to have to
           | rethink labor in both large and small ways.
        
             | anonylizard wrote:
              | It's not going to do anything to the commercial driver,
              | factory/warehouse, and food service industries.
             | 
             | Those are physical labour based industries, and dexterity
             | robots are very, very expensive and not cost competitive
             | with humans.
             | 
             | Low end, phone/chat only customer service will be replaced
             | very soon. But what's next in line is millions of white
             | collar jobs, from medicine to law to accountancy to
             | teachers.
             | 
              | You are like 2 years behind; the 'truck drivers will be
              | replaced first' narrative has long been flipped on its
              | head.
        
       | narush wrote:
       | We've thought about this question a lot at Mito[1], where we're
       | building a spreadsheet that code-gens Python code for you as you
       | edit it. For us, it's been useful to decompose the question of
       | "what code-gen is good for" into a few sub-questions that help us
       | think about how generative AI approaches effect us:
       | 
       | 1. Why is it necessary to generate code in the first place? Can
       | you just skip to the "solution?" 2. Why is just writing the code
       | by the hand not the best solution? 3. So you do want to do code-
       | gen, does it make sense to do it in a chat interface, or can we
       | do better?
       | 
       | As a Figma user, I'd answer these in the following way:
       | 
       | > Why is it necessary to generate code in the first place?
       | 
       | Because mockups aren't your production website, and your
       | production website is written in code. But maybe this is just for
       | now?
       | 
       | I'm sure some high-up PM at Figma has this as their goal - mockup
       | the website in Figma, it generates the code for a website (you
       | don't see this code!), and then you can click deploy _so easily_.
       | Who wants to bet that hosting services like Vercel etc reach out
       | to Figma once a week to try and pitch them...
       | 
       | In the meantime, while we have websites that don't fit neatly
       | inside Figma constraints, while developers are easier to hire
       | than good designers (in my experience), while no-code tools are
       | continually thought of as limiting and a bad long-term solution
       | -- Figma code export is good.
       | 
        | > Why is writing the code by hand not the best solution?
       | 
       | For the majority of us full-stack devs who have written >0 CSS
       | but are less than masters, I'll leave this as self-evident.
       | 
        | > If you do want to do code-gen, does it make sense to do it in
        | a chat interface, or can we do better?
       | 
        | In the case of Figma, if they were a new startup with no
        | existing product and they were trying to "automate UI creation"
        | -- v1 of their interface would probably be "describe your
        | website" and then we'll generate the code for it.
       | 
       | This would probably suck. What if you wanted to easily tweak the
       | output? What if you had trouble describing what you wanted, but
       | you could draw it (ok, OpenAI vision might help on this one)?
        | What if you had experience with existing design tools you could
        | use to augment the AI? A chat interface is not the best
        | interface for design work.
       | 
        | ChatGPT-style code-generation is like v0.1. GitHub Copilot is an
        | example of the next step - it's not just a chat interface, it's
        | something a bit more integrated into an environment that makes
        | sense in the context of the work you're doing. For design work, a
       | canvas (literally! [2]) like Figma is well-suited as an
       | environment for code-gen that can augment (and maybe one day
       | replace) the programmers working on frontend. For tabular data
       | work, we think a spreadsheet is the interface where users want to
       | be, and the interface it makes sense to bring code-gen to.
       | 
       | Any thoughts appreciated!
       | 
       | [1] https://trymito.io, https://github.com/mito-ds/mito [2]
       | https://www.figma.com/blog/building-a-professional-design-to...
        
       | jackconsidine wrote:
       | I thought these were some good ideas. In my experience I
       | vacillate between "omg the singularity is here" and "this
       | actually isn't that good for X specific task".
       | 
       | I very much trust the output of LLMs to be well-designed, but I
       | don't trust things to just work, especially if the system is
       | complicated. I experimented a bit the past few days doing a task
       | myself (building an interface in an existing project), using AI
       | assist, and trying to get AI to solve completely (GPT-4). The
       | solve completely pathway failed and I found myself in an
       | interminable loop. AI-assist was a solid experience.
       | 
       | Anecdotal but consistent with Figma's observation
        
         | tymscar wrote:
         | I have ran 5 "test cases" where I would jump on a video call
         | with some friends that are not software engineers but are very
         | technically savvy, and would give them a simple-ish task. They
         | are allowed to use ChatGPT as well as Google. None managed to
         | do it in 4 hours each, and that was with me giving them hints
         | when the AI would inevitably get into a loop. The task was to
         | install docker and using docker compose host a reverse proxy
         | with Traefik with self signed SSL, as well as web-server in
         | Rust. All the rust app had to do was read the kernel version of
         | the host machine and return it as html. I have then ran the
         | same exact test, but this time around, all you had to do was
         | install docker, and with docker compose run a Traefik reverse
         | proxy with self signed SSL, and two other containers,
         | UptimeKuma and Audiobookshelf. No dice, nobody managed to do
         | it.
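          | 
          | (For context on why the task sounded "simple-ish": the web
          | server half really is tiny. Here's a rough Python stand-in for
          | the Rust app -- containers share the host kernel, so uname
          | inside the container reports the host's version. The Docker
          | and Traefik plumbing is where everyone got stuck:)
          | 
          |     # Minimal sketch: serve the kernel version as HTML on :8000.
          |     from http.server import BaseHTTPRequestHandler, HTTPServer
          |     import platform
          | 
          |     class Handler(BaseHTTPRequestHandler):
          |         def do_GET(self):
          |             # platform.release() is the running kernel version
          |             body = ("<html><body><h1>" + platform.release() +
          |                     "</h1></body></html>").encode()
          |             self.send_response(200)
          |             self.send_header("Content-Type", "text/html")
          |             self.send_header("Content-Length", str(len(body)))
          |             self.end_headers()
          |             self.wfile.write(body)
          | 
          |     HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()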
        
           | jackconsidine wrote:
           | Wow kudos to you for running such a controlled and extensive
           | experiment
        
           | rolisz wrote:
           | Are they still friends with you? Are they still returning
           | your calls?
           | 
           | I've been coding for 15 years, but on my NAS I still use the
           | Synology reverse proxy so I don't have to deal with Traefik
           | and self signed SSL.
        
             | tymscar wrote:
             | Now that you mention it, they haven't really answered my
             | calls in weeks. Joking aside, we actually had a lot of fun
             | and they thought it was very eye opening, not only about
             | AI, but also about what my work life is like.
        
         | SomeCallMeTim wrote:
         | The output of LLMs is ... rarely well-designed. Well-documented
         | (with often incorrect documentation), well-formatted for sure,
         | but profoundly not well-designed, unless you're asking for
         | something so small that the design is trivial.
         | 
         | Even with GPT-4, if you ask it for anything interesting, it
         | often produces code that not only won't work, but that couldn't
         | possibly work without a major rewrite.
         | 
         | Not sure what you've been requesting if it's always been good
         | output. Even when asking GPT-4 for docs I've had it hallucinate
         | imaginary APIs and parameters more often than not.
         | 
         | Maybe the questions I ask are not as common? Given my
         | experiences, though, I wouldn't recommend it to anyone for fear
         | it gave them profoundly bad advice.
        
           | dkersten wrote:
           | I've come to the conclusion that GPT produces code at a level
           | of a new graduate _at best_. In actually getting it to solve
           | something more or less on its own, it did ok on simple tasks
           | and failed as soon as requirements became a bit more nuanced
           | or specific. It's also not very good at thinking out of the
            | box; its solutions are all very clearly tied to its training
            | data, meaning it struggled with anything that strayed too
            | far into the abstract or unfamiliar.
           | 
           | However it's been great at being my rubber duck and it's been
           | great as a tool for helping me eg write complex SQL queries
           | -- never without me being a key part of the loop, but as a
           | tool to help me fill in gaps in my own skills or
           | understanding. That is, it amplified my abilities. It was
           | also pretty good at creating interesting metaphors for
           | existing concepts, explaining terminology and even explaining
           | bits of code I gave it.
        
           | nimithryn wrote:
           | My experience as well. Heavy GPT-4 use (for a variety of
           | things). Great for boilerplate, great for retrieving well-
           | known examples from documentation, saves a fair amount of
           | time typing and googling, but often completely wrong (majorly
           | and subtly) and anything non-trivial I have to do myself.
           | 
           | Great tool! Saves a ton of time! Not a dev replacement (yet)
        
             | MeImCounting wrote:
              | Now, I am definitely doing much simpler things than you
              | folks are, I would wager, but I have found that with a bit
              | of back and forth you can get pretty good results that
              | work with only a bit of revision. I have found that
              | reminding it of the purpose or goals of whatever you're
              | working on at the moment tends to make the output a bit
              | more consistent.
        
               | koreth1 wrote:
               | > with a bit of back and forth you can get pretty good
               | results that work with only a bit of revision
               | 
               | The problem for me is that the "back and forth" and "a
               | bit of revision" steps very often end up taking more time
               | than writing the code myself would have.
        
               | MeImCounting wrote:
                | That's because you actually know what you're doing,
                | haha.
               | 
               | In all seriousness I am not a software engineer and GPT
               | has enabled me to build things in a couple weeks that
               | would have taken me months of effort to create otherwise.
               | 
                | I am sure an actual software engineer could have made
                | those same tools in a day or two, but it's still
                | incredible for my use case.
        
         | teaearlgraycold wrote:
         | There are posts on X about how powerful GPT4 is. And the videos
         | are really impressive. But then in my own experiences it's only
         | really good if you know what you're doing and can carefully
         | guide it into taking a single step in a process. Anything more
         | and the failure rate explodes upwards. I love using it as a
         | copilot (github copilot chat in vscode is great). But it's so
         | far from "singularity" that I don't fear for my job as a
         | programmer yet.
        
       | emporas wrote:
        | I remember back in March, using ChatGPT to generate code, I was
        | not impressed at all. I mean, the code was okay-ish, but nothing
        | that revolutionary.
        | 
        | Fast forward 3 months: I learned to use the tool, and now I use
        | it all the time to generate code. There are some ways to use it
        | in which it really shines as a tool.
       | 
        | 1) It works well for CLI tools. It knows thousands of Linux
        | commands and can use them flawlessly most of the time. Any
        | recombining of existing command line tools is a breeze.
       | 
        | 2) Try to use it on a language with as strict a compiler as
        | possible. Rust is the most obvious and modern candidate. Untyped
        | or dynamically typed languages like Python and Angular are far
        | from ideal and should be avoided. TypeScript is one more obvious
        | language of choice.
       | 
        | 3) Programming jargon, and knowledge about libraries in the
        | programming language, are very important. Being as specific as
        | possible about libraries, maybe even modules and functions,
        | makes all the difference.
       | 
       | Unfortunately the third point rules out any amateur trying to use
       | code generation effectively.
       | 
        | In the following days, I want to use GPT to create a
        | notification system for HN comments: a daemon running in the
        | background which downloads my comment page, saves all comments
        | in a database, and for every reply sends a notification, using
        | notify-send with the user and the first 10 words of the reply.
        | Maybe a subtle sound effect as well, like Gmail or Facebook.
        | 
        | Does a tool like that exist? I have no doubt GPT will excel at
        | this not-that-difficult but still non-trivial task.
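        | 
        | (For what it's worth, a rough sketch of such a daemon in
        | Python -- assuming the public Algolia search API and Firebase
        | item API rather than scraping the comment page, notify-send on
        | Linux, and field names like "objectID" and "kids" from memory;
        | an untested illustration, not a finished tool:)
        | 
        |     import json, sqlite3, subprocess, time
        |     import urllib.request
        | 
        |     USER = "emporas"  # example: the account whose replies to watch
        |     ALGOLIA = ("https://hn.algolia.com/api/v1/search_by_date"
        |                "?tags=comment,author_" + USER)
        |     ITEM = "https://hacker-news.firebaseio.com/v0/item/{}.json"
        | 
        |     def get(url):
        |         with urllib.request.urlopen(url, timeout=30) as r:
        |             return json.load(r)
        | 
        |     db = sqlite3.connect("hn_replies.db")
        |     db.execute("CREATE TABLE IF NOT EXISTS seen (id INTEGER PRIMARY KEY)")
        | 
        |     while True:
        |         # my most recent comments, newest first
        |         for hit in get(ALGOLIA).get("hits", []):
        |             item = get(ITEM.format(hit["objectID"])) or {}
        |             for kid in item.get("kids", []):  # children = replies
        |                 if db.execute("SELECT 1 FROM seen WHERE id=?",
        |                               (kid,)).fetchone():
        |                     continue
        |                 reply = get(ITEM.format(kid)) or {}
        |                 db.execute("INSERT INTO seen VALUES (?)", (kid,))
        |                 db.commit()
        |                 words = " ".join(reply.get("text", "").split()[:10])
        |                 subprocess.run(["notify-send",
        |                                 "HN reply from " + reply.get("by", "?"),
        |                                 words])  # reply text is raw HTML here
        |         time.sleep(300)  # poll every 5 minutes
        | 
        | Using the item API's "kids" field avoids parsing the HTML
        | comment page at all; the sqlite table just de-duplicates
        | notifications across runs.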
        
         | [deleted]
        
       | arabello wrote:
       | > _Instead of thinking about codegen as a replacement for a
       | developer, what if we thought about it as an extension?_
       | 
       | Who would think otherwise? It seemed pretty clear from the
       | beginning that anything that automatically generates code would
       | act as an _assistant_ rather than a replacement
       | 
       | EDIT: Formatting
        
         | danielvaughn wrote:
         | Yeah the article seems to address a question that no one is
         | asking, unless maybe upper management.
        
         | [deleted]
        
       | IAmGraydon wrote:
        | I have to say this kind of reeks of Figma feeling threatened.
        
         | [deleted]
        
       | kaycebasques wrote:
       | There's a few comments in here along the lines of "codegen is not
       | the correct term for describing what LLMs do". What is the
       | established term?
        
         | hashtag-til wrote:
         | I guess it should really be text generation or something else.
         | Not codegen please, that's taken.
        
         | [deleted]
        
         | verdverm wrote:
         | There is none
         | 
         | Many people call it code gen, I would wager that it is at least
         | a plurality that use "code gen" for what LLMs do.
        
       | whywhywhywhy wrote:
       | > developers, only 3% say that they highly trust the accuracy of
       | these (AI) tools
       | 
       | I don't believe this at all judging from how I've seen developers
       | using them the past few months. Seems a huge amount of trust in
       | them. Although I'd say the trust is earned, it's shocking how
       | often ChatGPT can completely nail a request.
        
         | darknavi wrote:
         | I think the key word is "highly".
         | 
         | I won't argue the metrics but while I generally trust Copilot
         | and ChatGPT, I would not say I "highly trust" them.
        
         | jdminhbg wrote:
         | I think they can be useful without saying you "highly trust"
         | them. If you moderately trust them, you can still let them do
         | grunt work that you'll know you have to look over before
         | committing.
        
         | TrueDuality wrote:
         | Use and trust are different things. I use Google Search, I
         | don't trust the first answer it gives me until I confirm it. I
         | use co-pilot, but I don't accept every request. I don't trust
         | them to be right, or even commonly right. Even the code
         | generation benchmarks for language models are sitting between
         | 30-50% correctness for the absolute best models. Would you
         | trust a lawyer that is right 30-50% of the time?
        
       | latenightcoding wrote:
       | Never seen anybody refer to chatgpt code as "codegen".
        
         | verdverm wrote:
         | Have you tried searching "chatgpt code generation"?
         | 
         | Many results
         | 
         | We talk about it in our Readme, under features > chat
         | 
         | > Combine LLM and Hof code gen for better, scalable results
         | 
         | https://github.com/hofstadter-io/hof
        
         | hashtag-til wrote:
         | I was puzzled by that as well and was surprised by the
         | employment of the "codegen" terminology here.
         | 
         | Well, it seems it will be another round of terminology abuse.
        
         | [deleted]
        
       | beaugunderson wrote:
       | Entire page crashes with "Application error: a client-side
       | exception has occurred (see the browser console for more
       | information)." if HTML5 autoplay is disabled.
        
         | [deleted]
        
       | galkk wrote:
        | I'm quite sure that LLM style things will be smart enough to
        | generate things like complicated decision trees and state
        | machines from human explanations, plus test suites and
        | explanations of which cases are ambiguous and/or missing.
       | 
       | I'm still quite skeptical about generic codegen.
        
         | packetlost wrote:
          | > I'm quite sure that LLM style things will be smart enough to
          | generate things like complicated decision trees and state
          | machines from human explanations, plus test suites and
          | explanations of which cases are ambiguous and/or missing.
         | 
         | I'm not.
        
         | ShamelessC wrote:
         | You have a very strange notion of the current strengths of
         | generative AI.
        
       | KronisLV wrote:
       | Even before LLMs, I think that codegen (the model driven variety)
       | was a good thing!
       | 
       | For example, you could throw a WSDL file at SoapUI and get an API
       | client that let you play around with it
       | (https://www.soapui.org/docs/soap-and-wsdl/working-with-wsdls...)
       | or client code for some frameworks
       | (https://www.soapui.org/docs/soap-and-wsdl/soap-code-
       | generati...), more or less what OpenAPI/Swagger is really nice
       | for nowadays: https://swagger.io/tools/swagger-ui/ and
       | https://swagger.io/tools/swagger-codegen/
       | 
       | You can also do the reverse, say, go from a live MySQL/MariaDB
       | database running somewhere or a script to a model of it by
       | reverse engineering (https://dev.mysql.com/doc/workbench/en/wb-
       | reverse-engineerin...), so that you can explore and change it in
       | a visual manner. You can take either that, or a model created
       | from scratch and either synchronize it with an existing schema,
       | or get the full set of SQL migrations for setting it up from
       | scratch: https://dev.mysql.com/doc/workbench/en/wb-design-
       | schema.html and https://dev.mysql.com/doc/workbench/en/wb-
       | forward-engineerin...
       | 
       | Codegen like that can even extend across architecture tiers. For
       | example, I can take a live database with said schema, connect
       | JetBrains Rider with an EntityFramework plugin (using ASP.NET and
       | C# here as an example, though similar solutions exist in Java and
       | other tech stacks) and generate a set of entities with mostly
       | correct data types and relation mappings automatically:
       | https://blog.jetbrains.com/dotnet/2022/01/31/entity-framewor...
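        | 
        | (A toy illustration of that "live schema to entity classes"
        | step -- in the same spirit, but nothing like the real EF/Rider
        | tooling: just Python's built-in sqlite3 emitting dataclass stubs
        | from whatever tables exist in a database file, with a
        | deliberately crude type mapping:)
        | 
        |     import sqlite3
        | 
        |     TYPES = {"INTEGER": "int", "REAL": "float",
        |              "TEXT": "str", "BLOB": "bytes"}
        | 
        |     def dataclass_stubs(db_path):
        |         conn = sqlite3.connect(db_path)
        |         out = ["from dataclasses import dataclass", ""]
        |         rows = conn.execute(
        |             "SELECT name FROM sqlite_master "
        |             "WHERE type='table' AND name NOT LIKE 'sqlite_%'")
        |         for (table,) in rows.fetchall():
        |             out += ["@dataclass",
        |                     "class " + table.title().replace("_", "") + ":"]
        |             # PRAGMA table_info rows: (cid, name, type, notnull, dflt, pk)
        |             for _, name, ctype, *_ in conn.execute(
        |                     "PRAGMA table_info(%s)" % table):
        |                 py = TYPES.get(ctype.upper().split("(")[0], "str")
        |                 out.append("    " + name + ": " + py)
        |             out.append("")
        |         return "\n".join(out)
        | 
        |     print(dataclass_stubs("app.db"))  # hypothetical database file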
       | 
       | Not only that, but OpenAPI/Swagger codegen is integrated in
       | ASP.NET so I can also end up with a web based UI to test any APIs
       | that I might make, should I opt to create controllers that use
       | those entities.
       | 
       | While most of the codegen I've seen in an academic context has
       | been more or less a mess (broken Eclipse plugin based tools),
       | practical approaches like this are wonderful - for the more
       | boring and boilerplate stuff, I can basically draw a few boxes
       | and get bunches of SQL and C# code that I can then change and/or
       | fine tune as necessary, using the generated stuff as a basis
       | (even though re-generating it would probably overwrite the
       | changes).
       | 
       | LLMs feel like the logical next step: feed in a bunch of projects
       | and language documentation and you can query the LLM with various
       | questions about how to do something, to at least sometimes send
        | you on the correct search path without having to jump around 15
        | different documentation pages; maybe only 5 will suffice now. I
        | use ChatGPT fairly liberally for my personal
       | projects and while it's no silver bullet, it feels like a value
       | add to me, since there's a surprising amount of boilerplate out
       | there for the boring problems that I solve, with every framework
       | and language solving the same stuff in slightly different ways.
        
       | Pannoniae wrote:
       | This article is so misleading.... when someone talks about
       | codegen, it usually means AOP, bytecode manipulation, compiler
       | plugins and the sorts. Not "AI tools".
       | 
        | What is even worse is that this isn't even a proper article;
        | this is just a thinly veiled advertisement.
        
         | bafe wrote:
         | I opened the link expecting an article criticising annotation
         | processors or macros, not a vague writeup on LLM/AI code tools
        
         | verdverm wrote:
          | This is an outdated perspective; most people think differently
          | now (and think very little about longer-established code gen
          | tasks)
        
         | mlinsey wrote:
         | > when someone talks about codegen, it usually means AOP,
         | bytecode manipulation, compiler plugins and the sorts. Not "AI
         | tools"
         | 
         | I have found the opposite to be true, starting this year. You
         | can commiserate with people who are interested in cryptography.
        
           | ReactiveJelly wrote:
           | Every year I commiserate harder
           | 
           | - "Dexter" is no longer "Dexter's Lab" (It's Darkly Dreaming
           | Dexter)
           | 
           | - "Dark Descent" is no longer "Amnesia" (It's some other
           | game)
           | 
           | - "Server" no longer means "server" (It means a Discord
           | guild)
           | 
           | - "Serverless" now means "On a server"
           | 
           | - "Operating system" sometimes means "web app"
           | 
           | - "Powered by" has nothing to do with power
           | 
           | And Internet speeds are still measured in bits, in case
           | anyone is not running the latest System/360
        
             | bigstrat2003 wrote:
             | Don't forget "literally" which people have rendered useless
             | by using it to mean "figuratively". And even the dictionary
             | allowed that misuse of the language. :(
        
               | krapp wrote:
               | Literally has _always_ meant figuratively[0]. That is not
               | a misuse of the language, that is how the language is
               | used and has been used since at least 1769.
               | 
               | [0]https://blogs.illinois.edu/view/25/96439
        
               | bigstrat2003 wrote:
               | I don't know how you figure that. Up until the last 10
               | years, when someone said "I literally (did $thing)" it
               | was a statement that they actually did whatever it was,
               | regardless of how unlikely it might seem. That is the
               | polar opposite of how people (especially the kids, who
               | should get off my lawn) use it today. It is a misuse of
               | language, plain and simple.
        
               | derrikcurran wrote:
               | I've always seen it more as exaggeration/absurdity for
               | the sake of humor than ignorant misuse of the word. Like
               | in a cartoon when a character's eyes _literally_ pop out
               | of their head. Of course, even if that's true, I'm sure
               | the nuance would sometimes be lost in translation and
               | help blur the definition in common vernacular just the
               | same.
        
               | krapp wrote:
               | No, "literally" has been used to mean "figuratively" as a
               | common English idiom for centuries, and there is plenty
               | of documentary proof to that effect, including the
               | specific year I mentioned (1769) being the first use of
               | "literally" as "figuratively" attested to in print. This
               | isn't something "the kids" came up with ten years ago, or
               | that I just made up.
               | 
               | Here. Here's another article about it[0]. Like all
               | language prescriptivists, you're simply wrong.
               | 
               | [0]https://www.thecut.com/2018/01/the-300-year-history-
               | of-using...
        
               | bigstrat2003 wrote:
               | Ok bro. I don't know why you're being so hostile. But
               | I'll just say that I disagree that your sources
               | persuasively back up your argument. Hell, the second one
               | is completely irrelevant. It's just trying to come up
               | with an excuse to complain about (non-existent) sexism,
               | and has no good argument whatsoever. Your first source is
               | better, but I still don't agree it substantiates your
               | claim.
               | 
               | Oh, and since we're apparently trading insults about
               | linguistic views: like all linguistic descriptivists, you
               | don't know what words mean. ;)
        
             | fragmede wrote:
             | Oh, and clouds are now where other people's computers live,
             | and not full of rain and lightning, both of which aren't
             | friends to computers.
             | 
             | Jason isn't my friend anymore, but a data interchange
             | format.
             | 
             | Neither is Kate, she's how I orchestrate my containers.
             | 
             | Ghosting doesn't involve poltergeists or a Halloween, Zoom
             | isn't a function on my camera, and fishing steals my data
             | and doesn't get me delicious salmon.
        
         | robertlagrant wrote:
         | > when someone talks about codegen, it usually means AOP,
         | bytecode manipulation, compiler plugins and the sorts
         | 
         | This feels like a JVM-land interpretation? I think it's quite
         | domain-specific.
        
           | nightpool wrote:
           | The examples GP gave were java-specific, but the concept of
           | codegen meaning "deterministic generation of code in a
           | 'compile' phase" is universal--think of OpenAPI libraries, Go
           | pre-generics, Babel, etc.
        
           | Pannoniae wrote:
           | Yeah, sorry about that. If I was more familiar with native
           | codegen tools I would have brought examples from them as
           | well. Hopefully I still got my point across.
        
             | robertlagrant wrote:
             | No worries - all I mean is that you, from Java domain,
             | interpret the word one way. Different domains interpret it
             | very differently.
        
           | ReactiveJelly wrote:
           | Not just JVM. Rust also has a codegen step in its compiler
           | pipeline https://doc.rust-lang.org/rustc/codegen-
           | options/index.html
        
         | imachine1980_ wrote:
          | I don't, at least not this year, but it being Figma I was
          | thinking it was about asm.js codegen or WebAssembly code
          | generation.
        
         | [deleted]
        
         | leoqa wrote:
         | I'm in between: I think of protoc and grpc bindings, or old
         | school Go generics.
        
           | mikepurvis wrote:
           | The same-- I'm in a long-time world of generated message
           | headers for ROS, so "codegen" to me is that
           | add_custom_command/add_custom_target dance you do with CMake
           | to get it to run a non-compiler tool ahead of actual
           | compilation.
        
           | chrsig wrote:
           | Without additional context, this is my default understanding.
           | If it were in a compiler development conversation, I'd think
           | of outputted machine code.
        
         | diogenes4 wrote:
         | > This article is so misleading.... when someone talks about
         | codegen, it usually means AOP, bytecode manipulation, compiler
         | plugins and the sorts. Not "AI tools".
         | 
         | This is an odd take to me when LLMs are very, very good at
         | generating code, which has got a _lot_ of attention recently.
          | Sure it may be a different beast from what we've currently
         | identified as "codegen" but it remains a descriptive term for
         | the code-generating technology.
        
           | SomeCallMeTim wrote:
           | > LLMs are very, very good at generating code
           | 
           | Ummm.... Awful code that often looks right at first glance,
           | maybe.
           | 
           | Maybe LLMs can generate the kind of code that's really
           | shallow in its complexity, but for literally everything I
           | would call interesting LLMs have produced hot garbage. From
           | "it doesn't quite do what I want" to "it couldn't possibly
           | work and it's extremely far from being sane," though it
           | always _looks_ reasonable.
        
             | diogenes4 wrote:
             | > Ummm.... Awful code that often looks right at first
             | glance, maybe.
             | 
             | > Maybe LLMs can generate the kind of code that's really
             | shallow in its complexity, but for literally everything I
             | would call interesting LLMs have produced hot garbage. From
             | "it doesn't quite do what I want" to "it couldn't possibly
             | work and it's extremely far from being sane," though it
             | always looks reasonable.
             | 
             | None of this has any bearing.
        
           | ynx wrote:
            | Words mean things, especially when communicating in
            | professional or semi-professional settings using terminology
            | with a longstanding meaning. Ignoring that is poor
            | communication, and irresponsible depending on the stakes.
           | 
           | Codegen's long history of macro-fied (or similar kind of
           | scripting) source writing matters here. It has the very
           | important property of having consistent/deterministic output
           | from a process that can be verified with very high confidence
           | by rudimentary human inspection.
           | 
           | It's like calling autocomplete "codegen".
           | 
           | I truly love LLM-assisted coding. I would never call it
           | codegen, and think it can even be unethical to do so when the
           | stakes are high, because it gives it the veneer of
            | trustworthiness that leads one to carelessly not audit it.
           | 
           | LLM-aided code writing should, IMO, be called "code assist",
           | not "codegen".
        
             | pvg wrote:
             | I think this is more a matter of familiarity and habit -
             | notice how there are several slightly different definitions
             | of exactly what 'codegen' means in this thread alone. It
                | doesn't really have some super-specific meaning; it's
                | not, dunno, 'lexical scoping'. It's still just shorthand
                | for
             | 'code generation' and it's not unreasonable to apply it to,
             | well, most automated processes that generate code.
        
               | ynx wrote:
               | I don't buy a nihilistic "does anything mean anything
               | anymore?" approach.
               | 
               | I'm basing my evaluation on Copilot. It functions as a
               | highly context-sensitive and very useful autocomplete, so
               | the "autocomplete" label is a complete, accurate, and
               | precise description for it.
               | 
               | It does not fill the same role as writing repeatable
               | macros, batch scripts in part of the build process, token
               | manipulation, or joining a tabular dataset with
               | templates...all components associated with traditional
               | code generation.
               | 
                | In the most strict pedantic sense, yes, LLMs create
                | lines of code via their internal processes, and that
                | could be
               | called "code generation" at a technical level. But they
               | serve different needs, with different techniques, and
               | different interfaces.
               | 
               | In terms of common parlance, no PM or manager who is not
               | pants-on-head dumb is going to suggest replacing scripts
               | that generate code 1:1 with LLM output, unless the
               | terminology itself has confused them into thinking they
               | fill the same role.
               | 
               | Poor communicators, whether because they are overly
               | pedantic or undertrained, may make this mistake. But in a
               | team environment, if "codegen" is considered a valid name
               | for LLM output, an effective communicator is _always_
               | going to need to clarify which they mean, because the
               | tasks are not interchangeable in the least.
        
               | pvg wrote:
               | My argument is that it's not a precise technical term of
                | art. There are many of those but this isn't one. The
               | evidence you can easily see in thread and in pretty wide
               | usage on the internet. Your argument is 'this is
               | nihilism' which is mostly vibes and grumpy vibes at that.
        
         | eyelidlessness wrote:
         | Yeah, I was really confused by the article at first. I
         | understand that the term must have evolved, but I hadn't
         | heard/seen it used this way until now. I'm already skeptical of
         | codegen in the more conventional sense, and definitely
         | skeptical of AI code generation... so I guess now I can carry
         | on as usual, with a slightly elevated "I'm probably not the
         | target audience" filter?
        
       | VyseofArcadia wrote:
       | You keep using that word. I don't think it means what you think
       | it means.
       | 
       | No, seriously, codegen already has a specific technical meaning.
       | AI generated code vomit needs to be called something else.
        
         | shadowgovt wrote:
         | What is that specific technical meaning? A quick Google search
         | suggests definitions for the term are all over the map;
         | Wikipedia aliases it to "code generation," which is a whole
         | family of processes (including program synthesis and model-
         | driven development).
         | 
         | I'm not seeing anything immediately obvious to exclude AI-
         | synthesized code from the "codegen" label.
        
           | conradludgate wrote:
           | https://en.m.wikipedia.org/wiki/Code_generation_(compiler)
            | It's the final stage of compilation: turning the abstract
            | machine intermediate representation into machine code.
        
             | verdverm wrote:
             | Most words have multiple definitions, code gen included.
             | 
             | Take for example "go gen", React or Markdown to html, Helm
             | templates, generics, Lisp, ... people think of these as
             | code generation.
             | 
             | Taken further, both humans and LLMs generate code based on
             | fuzzy statements
             | 
             | A transpiler is doing code gen as well, very similar to
             | your link, but a different, typically text based, target
        
               | aerzen wrote:
               | That's exactly the problem - codegen means a lot of
               | things. You just talk about codegen and expect people to
               | understand that you mean the AI vomit.
        
               | hu3 wrote:
               | With regards to the article, this is in the first part of
               | it:
               | 
               | Codegen is the process of generating code automatically,
               | based on a defined set of rules or specifications.
               | There's a wide ecosystem of codegen tools, including:
               | 
               | - Simple code completion in an integrated development
               | environment (IDE), like Microsoft's IntelliSense feature
               | 
               | - Templates for repeating code patterns, like code
               | snippets in Figma
               | 
               | - Visual programming and no-code tools, like Bubble
               | 
               | - Modern AI-based codegen systems, like GitHub Copilot
               | and Replit Ghostwriter
        
               | hashtag-til wrote:
                | I find many recent articles and, frankly, many GitHub
                | projects are increasingly looking desperate to grab the
                | hype, likes and stuff.
                | 
                | It's not even thought provoking. Just boring, blatant
                | marketing and showing off.
        
               | verdverm wrote:
               | > the AI vomit
               | 
               | You don't need to use such biased language, it only hurts
               | your argument
        
               | johnnyworker wrote:
               | A"I" vomit then?
               | 
               | And how would that hurt the argument? It's not royalty
               | where being offended in combination with power could be
               | brought to bear. If you replace "AI vomit" with "the
               | product of these fascinating achievements of the human
               | mind and persistent experimentation" in a sentence, it
                | changes nothing important about the rest of the
                | sentence.
        
               | verdverm wrote:
               | "vomit" is the problematic word in the context. The
               | comment author puts their bias on display.
               | 
               | Why should I take any arguments from an author seriously
               | when they layer their bias on?
               | 
               | They could have replaced "vomit" with "output" and have
               | had a better statement
               | 
               | This is besides the fact that they claimed I was
               | referring only to AI output as what I mean by code gen,
               | when I clearly listed many types of code gen. Again, they
               | demonstrate bias and poor argumentation skills.
        
               | VyseofArcadia wrote:
               | I said vomit and I meant vomit. I have an opinion about
               | this, and I intended to convey that opinion. I'm not
               | trying to avoid the appearance of bias.
        
               | verdverm wrote:
               | ummm, where did you say "vomit" before this most recent
               | comment?
               | 
               | You've only contributed the top-level comment in this
               | branch of the thread
        
               | johnnyworker wrote:
               | > Why should I take any arguments from an author
               | seriously when they layer their bias on?
               | 
               | Because they're not referring to private experiences, but
                | the shared world. It's kind of rich to talk about
                | "argumentation skills" while talking about your personal
                | need to take arguments from "an author" seriously or
                | not. Who cares? Then don't take them seriously; it takes
                | nothing away from them.
               | 
               | > I clearly listed many types of code gen
               | 
               | "Codegen is best for augmenting your design to
               | development process, not automating it.", "AI-based code
               | generation (codegen", "Instead of thinking about codegen
               | as a replacement for a developer", "codegen can speed up
               | your handoff workflow by making suggestions"
               | 
               | And so on. All throughout the article you use that word
               | to mean one thing and one thing only. Or put differently,
               | "Why should I take the arguments of someone seriously who
                | doesn't even know the article they wrote?"
        
               | verdverm wrote:
               | > All throughout the article you use that word to mean
               | one thing and one thing only.
               | 
               | What are you on about? I did not write the article
               | 
               | I was referring to my own comment:
               | https://news.ycombinator.com/item?id=37692519, to which
               | the reply with "vomit" misrepresented what I had written
               | 
               | Maybe try not attacking people when you are going to make
               | basic mistakes
        
             | hu3 wrote:
             | https://en.m.wikipedia.org/wiki/Automatic_programming#Sourc
             | e...
             | 
             | Still code generation according to Wikipedia:
             | 
             | > Source-code generation is the process of generating
             | source code based on a description of the problem[9] or an
             | ontological model such as a template and is accomplished
             | with a programming tool such as a template processor or an
             | integrated development environment (IDE). These tools allow
             | the generation of source code through any of various means.
        
         | [deleted]
        
       | tikkun wrote:
       | Did anyone else find the layout of the article to be quite tough
       | to read and confusing?
        
         | erksa wrote:
         | The font choice in the images made the call outs extremely hard
         | to read for me.
        
         | [deleted]
        
         | the_sleaze9 wrote:
         | Absolutely - almost nauseated actually.
         | 
         | The actual content felt a bit tortured as well which didn't
         | help anything.
        
       | pipeline_peak wrote:
       | [flagged]
        
       ___________________________________________________________________
       (page generated 2023-09-28 23:01 UTC)