[HN Gopher] AI is stifling new tech adoption?
       ___________________________________________________________________
        
       AI is stifling new tech adoption?
        
       Author : kiyanwang
       Score  : 394 points
       Date   : 2025-02-14 12:45 UTC (10 hours ago)
        
 (HTM) web link (vale.rocks)
 (TXT) w3m dump (vale.rocks)
        
       | ZaoLahma wrote:
       | Seems plausible, especially in combination with the AI-coma that
       | occurs when you tab-complete your way through problems at full
       | speed.
        
       | _as_text wrote:
       | know what this will be about without reading
       | 
        | Python 3.12-style type annotations are a good example imo; no
        | one uses the `type` statement because of dataset inertia
        
         | bigfishrunning wrote:
          | Usually, I remember that type annotations exist when I'm
          | debugging after things aren't working. If you look at the
          | Python code that I've written, type annotations are a sure sign
          | that "there was a problem here". It's like a scar on repaired
          | code.
        
         | DrFalkyn wrote:
          | Type annotations by themselves are little more than a comment.
        
           | OutOfHere wrote:
           | They're more than that as they allow for easy consistency
           | checking of types across calls. This makes all the
           | difference.
        
       | jimnotgym wrote:
       | Is this such a bad result? Do we need office CRUD apps to use
       | bleeding edge technologies?
        
         | Manfred wrote:
          | It's also a problem when adopting new functionality in existing
          | frameworks (e.g. upgrading an app to a new Android release),
          | dropping the use of deprecated functionality, taking advantage
          | of more readable syntax in programming languages, etc.
        
           | 9rx wrote:
           | _> taking advantage of more readable syntax in programming
           | languages_
           | 
           | "AI" is the programming language here. The readability of any
           | lower level language(s) that may exist as a compiler target
           | of the "AI" is about as important as how readable assembly
           | language is to someone writing software in Rust.
        
         | jtbayly wrote:
         | If they don't, then will the bleeding edge technologies ever be
         | used by more serious apps? Will it even be ready if we don't
         | test it with easier use-cases?
        
         | benrutter wrote:
         | Probably not, but I guess the worry would be, if nobody adopts
         | and uses them, bleeding edge technologies don't become the new
         | normal. Unless you think we've reached perfection, it's almost
         | guaranteed that future developers will look at React/Python/etc
         | as we look at developing in Assembly or COBOL.
        
       | jgalt212 wrote:
        | Along similar lines, I found Google autocomplete constricting my
        | search space. I would only search the terms that it completed.
        
       | physicsguy wrote:
       | If AI stifles the relentless churn in frontend frameworks then
       | perhaps it's a good thing.
        
         | ZaoLahma wrote:
         | It isn't only frontend frameworks.
         | 
          | I currently AI-coma / tab-complete C++17 with decent results
          | for stuff ridiculously far away from frontend, but I do wonder
          | who is providing the training data for C++23 and onwards, as
          | there isn't wide adoption yet.
        
           | kannanvijayan wrote:
           | I can chime in with a similar anecdote. I use co-pilot
           | extensively in "fancy tab completion" mode. I don't use the
           | conversational features - just the auto-complete to help my
           | coding along.
           | 
            | I specifically found it very useful when dealing with a bunch
            | of boilerplate C++ shim code that used some advanced template
            | magic to declare families of signatures for typed thunks that
            | wrapped and augmented some underlying function call with a
            | record/replay log.
           | 
           | It was arcane, bespoke stuff. Highly unlikely to be imitated
           | in training data. The underlying code was written by a
           | colleague (the CTO) but it was hard to grok just because of
           | all the noise the template magic added to the syntax, and
           | carefully keeping track of what was getting substituted
           | where.
           | 
           | The changes I was making were mostly to add new shims to this
           | collection of shims, and co-pilot happily used the pattern of
           | shims in previous lines of code to extrapolate what my new
           | shims should look like and offer me tab-completions that were
           | sensible.
           | 
           | That included some bad guesses (like inferred names for other
           | templated forms that referred to different argument types),
           | but overall it was structurally close enough to use as a
           | reference point and fix up as needed.
           | 
            | It was really great. It took what would have been about a
            | day's worth of work carefully figuring out from first
            | principles how the template system was structured, and made
            | it into about half an hour of "get it to generate the next
            | shim, verify that the shim does the right thing".
        
             | GrinningFool wrote:
             | > It was really great. Took what would have been about
             | day's worth of work carefully figuring out from first
             | principles how the template system was structured.. and
             | made it into about half an hour of "get it to generate the
             | next shim, verify that the shim does the right thing".
             | 
              | That also seems to highlight the disadvantage: if you'd
              | taken a day, you would have come away with a deeper
              | understanding of the template system.
        
         | weweersdfsd wrote:
          | I agree. Most frontend rewrites are totally unnecessary, caused
          | by resume-driven development, desire for the latest fancy
          | thing, and most JS frameworks having a short lifespan. If LLMs
          | reduce that behavior by steering devs towards the most popular
          | solutions, then it's only a good thing.
        
       | CharlieDigital wrote:
        | As the saying goes:
        | 
        |     while (React.isPopular) {
        |         React.isPopular = true;
        |     }
       | 
       | It's actually quite sad because there are objectively better
       | models both for performance and memory including Preact, Svelte,
       | Vue, and of course vanilla.
        
         | chipgap98 wrote:
         | Does that really matter to most companies/developers? I'd much
         | rather have a good enough solution with a large ecosystem built
         | around it. It also takes a lot of investment for companies to
         | change their tech stack
        
           | CharlieDigital wrote:
           | > Does that really matter to most companies/developers?
           | 
           | If you're asking about performance and memory, then yes, it
           | does.
           | 
            | This is especially true in e-commerce, where many studies
            | have shown that overall page performance correlates with
            | conversion. Add to that the fact that a lot of e-commerce
            | has moved to the mobile web, and there's a strong case for
            | picking the best performing technologies over developer
            | preference -- especially if AI is generating the code.
           | 
           | But even outside of e-comm, consider government websites
           | where you may have low income users that are using cheaper,
           | older, lower capability devices to access information and
           | services using lower speed networks to boot.
           | 
           | I do my day-to-day work on an M3 Max with 64GB RAM and fiber
           | to the home; it's easy for developers to forget that many
           | times, their end users can be on older devices, on low
           | performing networks, and other factors that affect
            | performance and usability of web applications.
            | 
            | > ...with a large ecosystem built around it
           | 
           | When you can generate whatever you want nearly instantly,
            | what does "ecosystem" mean? Your possibilities are endless.
            | Your mindset is rooted in a world where it's necessary
           | to depend on the work of others because it's too expensive
           | for you to write your own component library. Yes, that was
           | true 1 year ago; why would you waste time and energy to
           | create your own calendar component? But if an LLM can
           | generate any bespoke component that you need in < 3 seconds,
           | do you still need a third party component library?
           | 
           | In fact, you may be better off for _not_ creating the added
           | external dependency.
        
             | nottorp wrote:
             | > I do my day-to-day work on an M3 Max with 64GB RAM and
             | fiber to the home
             | 
             | And you still pay a few cents extra in power used because
             | of all those inefficient and memory hungry "applications".
             | You just don't notice.
        
             | madeofpalk wrote:
              | So, if you're a large e-commerce company trying to juice
              | out the last percentage points of conversion, and are
              | researching alternate JavaScript libraries, is it plausible
              | that the only research a development team would do is ask
              | ChatGPT?
        
               | CharlieDigital wrote:
               | Unfortunately not the case because of GPT's bias towards
               | React (the point of the article).
        
             | SiliconSplash wrote:
             | > If you're asking about performance and memory, then yes,
             | it does.
             | 
              | Most places just don't care. I've worked 15 years as a
              | contractor and only in one place has the business cared
              | about optimisation. As long as it wasn't unbearable, it
              | was "good enough".
             | 
             | > This is especially true in e-commerce where many studies
             | have shown that overall page performance has a correlation
             | to conversion. Add to that the fact that a lot of
             | e-commerce has moved to mobile web, there's a strong case
             | for picking the best performing technologies versus
             | developer preference -- especially if AI is generating it.
             | 
              | This may have been true back in 2014. 5G networks are
              | pretty fast and the mobile web is pretty bloated.
              | Performance is way down the list of concerns, typically
              | even at places that should care. I can write blazingly fast
              | custom JS frameworks; the number of times anyone has cared
              | is exactly once.
             | 
             | > I do my day-to-day work on an M3 Max with 64GB RAM and
             | fiber to the home; it's easy for developers to forget that
             | many times, their end users can be on older devices, on low
             | performing networks, and other factors that affect
             | performance and usability of web applications.
             | 
             | I have a 2010 Dell E6410 with 8GB of ram and an i7 640M
             | (Dual Core, 4 thread). Almost every modern phone is faster
             | now.
             | 
              | I am not arguing we should make things bloated. I am just
              | saying there isn't an incentive to optimise for low-end
              | devices, because today's low end is better than a
              | reasonably powerful business laptop of 10-15 years ago.
             | 
             | > why would you waste time and energy to create your own
             | calendar component? But if an LLM can generate any bespoke
             | component that you need in < 3 seconds, do you still need a
             | third party component library?
             | 
              | The code from the LLM probably hasn't been battle-tested.
              | The open source React component library with 1000s of stars
              | on GitHub definitely has been. If you run into a problem
              | with the LLM code, you are probably going to be by yourself
              | fixing it. I will take the component library over the LLM
              | code every day of the week.
        
               | breckenedge wrote:
               | Have you ever worked for a place that cared about meeting
               | CWV? Poor JS performance will definitely hurt rankings.
        
             | tcoff91 wrote:
              | If you use Next.js with React Server Components you can get
              | enough performance out of React for e-commerce.
              | 
              | Also, the React Compiler is improving client-side
              | performance by automatically memoizing everything to reduce
              | re-renders.
        
               | CharlieDigital wrote:
               | The auto-memoization is:
               | 
               | 1) Trading memory pressure for performance
               | 
                | 2) An admission of a broken model, because it's taken
                | them 2+ (almost 3?) years to build it in recognition that
                | developers can't get memoization right on their own
               | 
               | The reason other frameworks don't need this is because
               | they use signals connected to callbacks instead of the
               | reactive callback pointing to the component (as React
               | does). Thus Preact, Solid, Svelte, Vue, etc. do not have
               | this issue and you rarely (if ever?) have to manually
               | memoize.
               | 
               | The React team foot-gunned themselves and every React dev
               | out there.
               | 
               | I have some examples that walk through this concept using
               | JSFiddle so you can understand exactly why this design
               | choice from React created this problem in the first
               | place: https://chrlschn.dev/blog/2025/01/the-inverted-
               | reactivity-mo...
        
           | wrsh07 wrote:
            | Fwiw, I work at a SaaS company and we do have some
            | performance issues. It's about a 50/50 split between not
            | using React optimally and a slow backend.
           | 
           | If we were using svelte we would still have performance
           | issues, but they would probably be centered more on "is the
           | data in a sane shape such that this page can query it?"
        
         | spacephysics wrote:
         | React has become the Java of late 90's to mid 2000's.
         | 
         | Loads of libraries, documentation, and developers which creates
         | a flywheel that will grow those aspects over the next X years.
         | 
          | Until something comes up that is an order of magnitude better
          | in performance/maintainability, and even then it'll take years
          | to dethrone the status quo.
         | 
          | Good questions in these comments, essentially asking: does the
          | volume of training data in these models now contribute to the
          | inertia we see from libraries, documentation, and developer
          | support?
         | 
         | I believe so, but then again I think we'll soon have more niche
         | models for specific areas of development (like openart has with
         | a variety of image gen models)
        
           | marcosdumay wrote:
            | Most of the alternatives in the GP's comment have more than
            | an order of magnitude better performance.
            | 
            | Maintainability doesn't even enter the question. React's way
            | is to rewrite. All of the alternatives in the GP's comment
            | are possible to maintain.
        
           | re-thc wrote:
           | > React has become the Java of late 90's to mid 2000's.
           | 
           | Not comparable.
           | 
           | Java may not have had big releases during this time but there
           | were patches and support. Java 8 had numerous versions
           | (>400). You can get paid support from Sun/Oracle. In terms of
           | frameworks Spring was constantly upgraded and supported.
           | 
           | React has none of these. Older React versions rarely get
           | upgraded and just look at the amount of minor/patch releases
           | React gets these days. It's almost as though Meta no longer
           | cares. Earlier (<16) React was constantly updated. Nowadays
           | it's just to peddle Vercel.
        
           | tobyhinloopen wrote:
           | At least Java is good. React is absolutely terrible.
        
         | 9rx wrote:
          | But it's equally impressive that the JavaScript community has
          | actually managed to continue to use a single framework for
          | more than five minutes without jumping to the next.
        
           | trgn wrote:
            | Had to have hooks though. And vanilla-OO had to go in favor
            | of trapping state in closures, which is cooler because it
            | has functions and not methods.
        
             | Hasu wrote:
             | You can still use vanilla-OO React.
             | 
             | Do you complain when other frameworks add new features
             | without breaking backwards compatibility?
        
               | trgn wrote:
               | I certainly did complain! And I'm sure I'll do it again
               | if the new features aimed to supplant the old ones are
               | worse.
               | 
                | If you search anything about React now, 90% of the docs
                | are hook-based. Beginners of React in 2025 will be guided
                | to use a default pattern which has a worse runtime
                | footprint and adds a whole suite of new tool-specific
                | coding guidelines ("rules of hooks"). After years with
                | it, I struggle to see what it has added in terms of
                | building front-end SPAs, yet the pattern is now the
                | default for all using React.
        
               | CharlieDigital wrote:
               | > You can still use vanilla-OO React
               | 
               | What we want is signals-based React because it would
               | singularly fix the issue that the compiler is solving for
               | and remove the need to even have `useMemo` and
               | `useCallback`, improve performance, and remove a huge
               | footgun.
               | 
               | Because it has such a huge marketshare, a fix like this
               | would have tremendous positive effects in web dev. We are
               | all "forced" to use React so it would do all of us a
               | great service if the team would just backstep and say _"
               | Oops, we made a design mistake here!"_. Instead, they
               | spent almost 3 years building the compiler to sprinkle in
               | memoization primitives because enough devs cannot get it
               | right.
        
           | Hasu wrote:
           | React is almost 12 years old and has dominated frontend
           | development for almost a decade. I'd bet most JS backend
           | projects still use Express as the webserver, and it's even
           | older than React.
           | 
           | Can we please retire this meme? It's stale and adds nothing
           | to the conversation.
        
             | tcoff91 wrote:
              | Yes, thanks to React we can finally retire this meme. It
              | was very true before React became dominant, though.
        
         | onion2k wrote:
         | _and of course vanilla_
         | 
          | That depends on who is writing it and what the app is. Most
          | frontend code is written by people who don't have as much time
          | to focus on performance and optimization as core framework
          | developers, so once their apps reach a critical mass of
          | 'actually big enough to benefit from a framework', the app is
          | worse than it would have been if it had been written with a
          | framework in the first place.
         | 
         | The problem for all of us, and where frameworks often make the
         | web suck, is that very few apps are actually that big. Frontend
          | developers love to put React in a page that has one form input
          | and a button, which is dumb.
        
           | CharlieDigital wrote:
           | Ostensibly, an unbiased, well-trained model would solve this
           | because it would/could write fast, performant vanilla. As a
           | product owner, you probably care more about the performance
           | of the product over the specific technical implementation
           | details.
           | 
          | I take that to be the point of the article: the bias towards
          | React, and the training data being stale, mean that generated
          | code will always have a bit of staleness. And as we provide
          | less context for the AI (e.g. StackOverflow), the bias towards
          | staleness will amplify, given the large body of stale
          | information it has ingested and amalgamated.
        
         | sesm wrote:
         | Doesn't Preact use the same model but prioritise bundle size
         | over performance?
        
           | CharlieDigital wrote:
           | It's not the same model.
           | 
            | Many, many (if not most) devs probably do not realize that
            | React has an "inverted" model of reactivity, and that this is
            | in fact the root cause of its performance woes.
           | 
           | To the extent that the React team spent 2+ (almost 3?) years
           | working on a compiler to address the issue by adding in the
           | correct memoization primitives in a compile phase (trading
           | increased memory consumption for more runtime
           | performance...).
           | 
           | I wrote about it here with code examples that work in
           | JSFiddle: https://chrlschn.dev/blog/2025/01/the-inverted-
           | reactivity-mo...
           | 
           | The short of it is that by pointing the reactive callback to
           | the component function, it means that state within the
           | component function has to be managed carefully. This doesn't
           | happen in Vanilla, Preact, Solid, Svelte, and Vue because
           | they point the reactive callback (via "signals") to a handler
           | function that captures the component state in a closure. This
           | is also why they are all faster and consume less memory than
           | React.
           | 
            | Because React points the reactive callback to the component
            | function, it effectively starts from a clean slate on each
            | re-render, so the purpose of React Hooks is to move state out
            | and inject it back in (thus they are called "hooks") when it
            | re-renders. In Preact, this is not the case since it uses
            | signals: https://preactjs.com/guide/v10/signals/
           | 
           | A short video of the same examples if you prefer:
           | https://youtu.be/7OhyP8H7KW0
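The hooks mechanism described above can be sketched as a toy in plain JavaScript. This is a hypothetical model for illustration (`hookState`, `render`, and `Counter` are invented names), not React's actual implementation:

```javascript
// Toy model of the React approach: the component function re-runs on every
// render, so state must live outside it and be "hooked" back in by call order.
const hookState = [];   // state lives outside the component
let hookIndex = 0;

function useState(initial) {
  const i = hookIndex++;
  if (!(i in hookState)) hookState[i] = initial;
  const setState = (next) => { hookState[i] = next; };
  return [hookState[i], setState];
}

function render(component) {
  hookIndex = 0;        // replay hooks in the same order on each render
  return component();
}

// The whole function body re-executes on every render.
function Counter() {
  const [count, setCount] = useState(0);
  return { count, increment: () => setCount(count + 1) };
}

let ui = render(Counter);
ui.increment();          // a state change...
ui = render(Counter);    // ...requires re-running the entire component
// ui.count === 1
```

Because `Counter` re-executes from scratch, its state has to be re-injected by call order on every render, which is also why the "rules of hooks" forbid calling hooks conditionally in the real framework.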
        
             | sesm wrote:
             | The blogpost in the first link doesn't mention Preact at
             | all.
             | 
             | Preact is mostly API-compatible with React, and it having a
             | different underlying model is an extraordinary claim that
             | requires extraordinary evidence.
             | 
             | I've read the docs on Preact signals, and they look like
             | React refs, but put outside of components.
             | 
             | edit: the last paragraph about refs
        
               | CharlieDigital wrote:
               | > ...a different underlying model is an extraordinary
               | claim that requires extraordinary evidence
               | 
               | You are looking at the syntax and not the reactivity
               | model (how it determines what functions to call when a
               | change happens).
               | 
               | The post doesn't need to mention Preact because every
               | other framework is signals-based except for React. Vue is
               | simply the stand-in for "signals-based reactivity". Vue
                | has different syntax from Preact (though it, too, can
                | use JSX), but it has the same reactivity principle.
               | 
               | https://preactjs.com/guide/v10/signals/
               | > What makes Signals unique is that state changes
               | automatically update components and UI in the most
               | efficient way possible. Automatic state binding and
               | dependency tracking allows Signals to provide excellent
               | ergonomics and productivity while eliminating the most
               | common state management footguns.
               | 
               | It uses the same underlying reactivity model as Vue,
               | Svelte, Solid, and Qwik.
               | 
               | Vue docs: https://vuejs.org/guide/extras/reactivity-in-
               | depth.html#conn...
               | 
               | Solid docs:
               | https://www.solidjs.com/docs/latest/api#createsignal
               | 
               | Svelte: https://svelte.dev/blog/runes#Signal-boost
               | 
               | Qwik docs:
               | https://qwik.dev/docs/components/state/#usesignal
               | 
               | In the blog post, Vue is the stand-in for signals-based
               | reactivity. All signals-based reactivity models work the
               | same way at a high level (their difference being
               | primarily in their low-level DOM diff and update).
               | 
               | My prediction is that even React will eventually end up
               | signals based because of TC39 Signals:
               | https://github.com/tc39/proposal-signals
        
               | sesm wrote:
                | Preact is not signals-based; it uses the React model
                | with state and props as the basis, but also provides
                | Signals as a parallel model.
        
               | CharlieDigital wrote:
                | I guess that's like arguing a Prius Prime isn't electric
                | because, even though it has a battery and can drive
                | all-electric, it also has a gas engine. :shrug:
               | 
               | But very clearly, Preact has the option of the exact same
               | reactivity primitive and design as Vue, Solid, Qwik, and
               | Svelte
               | 
               | https://preactjs.com/guide/v10/signals/
               | > In Preact, when a signal is passed down through a tree
               | as props or context, we're only passing around references
               | to the signal. The signal can be updated without re-
               | rendering any components, since components see the signal
               | and not its value. This lets us skip all of the expensive
               | rendering work and jump immediately to any components in
               | the tree that actually access the signal's .value
               | property.
        
               | sesm wrote:
               | I'll accept an analogy with a hybrid car. In this
               | analogy, Preact would be a gas car with an additional
               | electric engine bolted on as an afterthought.
        
               | CharlieDigital wrote:
                | But you know, the Prius Prime is a PHEV, as in "plug-in
                | hybrid electric vehicle"
        
               | sesm wrote:
               | Oh, I see what you mean now! This sounds like a very good
               | analogy.
        
         | klysm wrote:
          | I think React hits a really good sweet spot in the trade-off
          | space. Sure, it's not the best thing that can exist, but it
          | really does solve a lot of problems in a way that isn't overly
          | restrictive.
          | 
          | My personal opinion is that a lot of the hate directed at
          | React is due to experiences with codebases that aren't good
          | React code.
        
         | tcoff91 wrote:
         | React is better than all of those because of the existence of
         | React Native, React Three Fiber, Remotion, etc...
         | 
         | It has the best ecosystem of libraries and it's not even close.
         | 
         | If you write your web app in Vue and decide you want mobile
         | apps later you won't be able to share much code there.
        
         | ge96 wrote:
        | It's my go-to, along with NodeJS on the backend and Electron as
        | a wrapper or React Native.
        | 
        | NGINX for my server, though recently I ran into an
        | out-of-connections problem that was new, on an Azure VM.
        
       | jgalt212 wrote:
        | Herein lies the key for IP protection: never use cloud-hosted
        | coding tools, as the world will soon be able to copy your
        | homework at zero cost.
        
         | falcor84 wrote:
          | I for one love that we can copy each other's homework. The
          | open source mindset is what made me fall in love with this
          | industry, and I love the fact that sharing code got easier. If
          | you really want to continue reinventing wheels, go ahead.
        
           | jgalt212 wrote:
            | Yes, open is overall a great thing. But what if I want my
            | work not to be open source, and the LLMs make it so without
            | my consent? As we've seen, this and related matters of fair
            | use are working their way through the courts.
        
       | tiahura wrote:
       | Perhaps reasoning will help?
        
         | johnecheck wrote:
         | Yes! Reasoning is the answer! It will solve all of our
         | problems! General AI is just around the
         | corner!!!!!!!!!!!!!!!!!!!
        
       | VMG wrote:
       | Guess I figured out my niche as a SWE: have a later knowledge
       | cutoff date than LLMs
        
         | weeniehutjr wrote:
         | Niche: know anything about anything.
        
       | spiderfarmer wrote:
       | >With Claude 3.5 Sonnet, which is generally my AI offering of
       | choice given its superior coding ability, my "What personal
       | preferences should Claude consider in responses?" profile setting
       | includes the line "When writing code, use vanilla HTML/CSS/JS
       | unless otherwise noted by me". Despite this, Claude will
       | frequently opt to generate new code with React, and in some
       | occurrences even rewrite my existing code into React against my
       | intent and without my consultation.
       | 
       | I noticed this too. Anyone found out how to make Claude work
       | better?
        
         | stuffoverflow wrote:
         | Since the system prompt tied to the artifacts feature seems to
         | be the reason for its preference for React, the solution would
         | be to use the API instead. Plenty of front-ends are available
         | nowadays that let you use your own API key. I've been using
         | typingmind since I paid for it over a year ago, but I'd be
         | interested to know if some good open source alternatives have
         | popped up more recently.
        
           | gordonhart wrote:
           | The main blocker to using the API with an alternative
           | frontend is the cost.
           | 
           | Daily API usage can easily go above the $20/month
           | subscription cost since output tokens are expensive and each
           | new message reuses the whole message chain as input tokens.
           | Especially true if you often upload images or documents.
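The growth is easy to see with a toy cost model: because every turn resends the entire prior chain as input tokens, cumulative input cost grows roughly quadratically with the number of turns. (Prices below are placeholders per million tokens, not real rates.)

```python
# Toy model of API chat cost: each turn resends the whole message chain
# as input tokens, so input cost compounds as the conversation grows.
INPUT_PRICE = 3.00 / 1_000_000    # placeholder $/token for input
OUTPUT_PRICE = 15.00 / 1_000_000  # placeholder $/token for output

def conversation_cost(turns: int, tokens_per_message: int = 500) -> float:
    """Cost of a chat where each turn adds one user message and one
    assistant reply, each tokens_per_message tokens long."""
    cost = 0.0
    history = 0  # tokens accumulated in the chain so far
    for _ in range(turns):
        history += tokens_per_message   # new user message joins the chain
        cost += history * INPUT_PRICE   # entire chain is sent as input
        history += tokens_per_message   # assistant reply joins the chain
        cost += tokens_per_message * OUTPUT_PRICE
    return cost

# A 50-turn session costs far more than five 10-turn sessions:
print(round(conversation_cost(10), 4), round(conversation_cost(50), 4))
```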
        
         | gordonhart wrote:
         | Claude is particularly bad about this, almost makes it unusable
         | for my frontend use cases. I specify the exact tech stack in my
         | prompt and it responds with a solution using whichever packages
         | are available in its environment (Tailwind, shadcn/ui, etc.).
         | 
         | My request to model providers: the strength of your offering is
         | its generality. Please let it be general-purpose and resist
         | adding features (alignment, system prompts, custom stuff like
         | Claude artifacts) that limit this.
        
       | orbital-decay wrote:
       | So... it slows down adoption by providing easier alternatives for
       | beginners? I guess you could look at it that way too.
       | 
       | Eventually it will go either of the two ways, though:
       | 
       | - models will have enough generalization ability to be trained on
       | new stuff that has passed the basic usefulness test in the hands
       | of enthusiasts and shows promise
       | 
       | - models will become smart enough to be useful even for obscure
       | things
        
       | PaulRobinson wrote:
       | I think if you specify a technology in your prompt, any LLM
       | should use that technology in its response. If you don't specify
       | a technology, and that is an important consideration in the
       | answer, it should clarify and ask about technology choices, and
       | if you don't know, it can make a recommendation.
       | 
       | LLMs should not have hard-wired preferences through providers'
       | prompt structure.
       | 
       | And while LLMs are stochastic parrots, and are likely to infer
       | React if a lot of the training corpus mentions React, work should
       | be done to actively prevent biases like this. If we can't get
       | this right with JS frameworks, how are we going to solve it for
       | more nuanced structural biases around ethnicity, gender, religion
       | or political perspective?
       | 
       | What I'm most concerned about here is that Anthropic is taking
       | investment from tech firms who vendor dev tooling - it would not
       | take much for them to "prefer" one of those proprietary
       | toolchains. We might not have much of a problem with React today,
       | but what if your choice of LLM started to determine if you could
       | or couldn't get recommendations on AWS vs Azure vs GCP vs bare
       | metal/roll your own? Or if it suggested only commercial tools
       | instead of F/LOSS?
       | 
       | And to take that to its logical conclusion, if that's happening,
       | how do I know that the history assignment a kid is asking for
       | help with isn't sneaking in an extreme viewpoint - and I don't
       | care if it's extreme left or right, just warped by a political
       | philosophy to be disconnected from truth - that the kid just
       | accepts as truth?
        
         | avbanks wrote:
         | This is actually a very interesting insight: not only do you
         | have to worry about sponsored results, but people could game
         | the system by spamming their library/language in places that
         | will be included in models' training sets. This also presents
         | a significant security challenge: I could spam a malicious
         | library/package in paths that get picked up in the training
         | set and have that package be referenced by the LLM.
        
         | BoxFour wrote:
         | The more likely and far more mundane outcome isn't that LLM
         | providers actively tip the scales, but rather that they just
         | entrench existing winners.
         | 
         | As others have pointed out, it's a flywheel: Popular library
         | gains traction - LLMs are trained to produce the "most likely
         | response", which naturally aligns with what's already popular -
         | people stop seeking alternative solutions and instead double
         | down on the existing mainstream choices. (Hypothetically) It's
         | not that OpenAI decides to push AWS, it's just that at the time
         | it was trained AWS was the only real option so it just
         | regurgitates a common view from a point in time.
         | 
         | To extend your analogy, the more realistic scenario isn't that
         | kids slip in extreme viewpoints and take them as ground truth
         | in their history assignments, it's that they don't take a
         | stance on anything at all: Their essays become like CSPAN
         | transcripts, perfectly regurgitating what happened without
         | taking any position or applying any critical thinking one way
         | or the other.
         | 
         | Imagine kids writing about civil rights, but all their
         | reference material was stuck in time at 1953: That's what's
         | more likely to happen.
        
           | wrsh07 wrote:
           | > The more likely and far more mundane outcome isn't that LLM
           | providers actively tip the scales, but rather that they just
           | entrench existing winners
           | 
           | 100%. This isn't that different from the previous status quo
           | (googling how to build a web app will give me guides from
           | digital ocean, vercel, etc about deploying the currently
           | popular technology on their platforms)
           | 
           | As in all things, though, the new technology reinforces this
           | feedback loop faster.
           | 
           | Fwiw, I haven't had any trouble using Claude in cursor to
           | write svelte5 code - there are tools (cursorrules, the svelte
           | 1-pager documentation for LLMs) that you can use to make sure
           | it uses the tech you want. It just requires intention from
           | the prompter and good documentation from the tooling
        
         | throwawaymaths wrote:
         | > I think if you specify a technology in your prompt, any LLM
         | should use that technology in its response. If you don't
         | specify a technology, and that is an important consideration in
         | the answer, it should clarify and ask about technology choices,
         | and if you don't know, it can make a recommendation.
         | 
         | I'm sure we'd all love that but this pipe dream is simply
         | incompatible with the way LLMs work.
         | 
         | orchestration/deployment/agent networks may be able to do that,
         | but that's basically impossible for the LLM itself.
        
       | Eridrus wrote:
       | This will be solved eventually on the AI model side. It isn't
       | some law of nature that it takes a million tokens for an AI to
       | learn something; just the fact that we can prompt these models
       | should convince you of that.
        
         | trescenzi wrote:
         | Maybe, but why would they bother? If 80% of the demand is met
         | by generating really good Python, and generating really good X
         | is a lot more work but covers only 2% of the demand, there
         | likely isn't going to be a reason to solve that problem well.
        
         | orbital-decay wrote:
         | That's assuming it's a novel problem to deal with, see e.g.
         | C++, JavaScript or every standard in networking ever. The
         | barrier between better tech and worse tech that accidentally
         | made it into production and became legacy cruft has always been
         | high, without any AI.
        
       | avbanks wrote:
       | LLM-based AI tools are the new No/Low Code.
        
         | ebiester wrote:
         | It is, but it's an order of magnitude better than the last set
         | of no/low code tools for anyone who has the basics of
         | programming already.
        
       | tajd wrote:
       | Yeah, maybe. But the thing I like is that it takes me a much
       | shorter amount of time to create solutions for my users and
       | myself. Then I can worry about "tech adoption" once I've
       | achieved a relevant solution for my users.
       | 
       | If performance is an issue then sure let's look at options. But I
       | don't think it's appropriate to expect that sort of level of
       | insight into an optimised solution from LLMs - but maybe that's
       | just because I've used them a lot.
       | 
       | They're just a function of their training data at the end of the
       | day. If you want to use new technology you might have to generate
       | your own training data as it were.
        
       | jwblackwell wrote:
       | Larger context windows are helping solve this, though.
       | 
       | I use Alpine.js, which is not as well known as React etc., but I
       | just added a bunch of examples and instructions to the new cursor
       | project rules, and it's now close to perfect.
       | 
       | Gemini models have up to 2M context windows, meaning you can
       | probably fit your whole codebase and a ton of examples in a
       | single request.
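As a rough sanity check on the "whole codebase in a single request" idea, one can estimate token counts with the common ~4 characters-per-token heuristic (an assumption; real tokenizers vary by language and content):

```python
import os

CHARS_PER_TOKEN = 4  # rough heuristic; actual tokenizers differ

def estimate_tokens(root: str, exts=(".py", ".js", ".ts", ".svelte")) -> int:
    """Walk a source tree and estimate its total token count."""
    total_chars = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                try:
                    with open(os.path.join(dirpath, name),
                              encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    pass  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN

def fits_in_context(root: str, context_window: int = 2_000_000) -> bool:
    """Leave ~20% headroom for the prompt, examples, and the reply."""
    return estimate_tokens(root) < context_window * 0.8
```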
       | 
       | Furthermore, the agentic way Cursor now behaves, automatically
       | building up context before taking action, seems to be another
       | way around this problem.
        
         | causal wrote:
         | I also suspect reasoning models will start contributing genuine
         | innovations to public repositories in the near future.
        
       | conradfr wrote:
       | I was thinking the other day about how coding assistants would
       | hinder new language adoption.
        
       | killjoywashere wrote:
       | Pathology as a specialty has been grousing about this for
       | several years, at least since 2021, when the College of American
       | Pathologists established its AI Committee. As a trivial example:
       | any trained model deployed will necessarily be behind any new
       | classification of tumors. This makes it harder to push the
       | science and clinical diagnosis of cancer forward.
       | 
       | The entire music community has been complaining about how old
       | music gets more recommendations on streaming platforms,
       | necessarily making it harder for new music to break out.
       | 
       | It's absolutely fascinating watching software developers come to
       | grips with what they have wrought.
        
         | Schiendelman wrote:
         | The healthcare diagnosis one may be wrong. For existing known
         | diagnoses (or at least the sliver of diagnoses in this one
         | study), AI can beat doctors - and doctors don't like listening
         | when it challenges them, so it will disrupt them badly as
         | people learn they can provide data from tests directly to AI
         | agents. Sure, this doesn't replace new diagnoses, but the
         | vaaaast majority of failures to diagnose are for existing well
         | classified diagnoses.
         | 
         | https://www.advisory.com/daily-briefing/2024/12/03/ai-diagno...
         | 
         | Edit: yeah, people don't like this.
        
           | killjoywashere wrote:
           | I'm familiar with the linked study, which presents
           | legitimately challenging analytic problems. There's a
           | difference between challenging analytic problems and new
           | analytic problems.
           | 
           | A new platform poses new analytic problems. A new edition of
           | the WHO's classification of skin tumors (1), for example,
           | presents new analytic problems.
           | 
           | (1) https://tumourclassification.iarc.who.int/chapters/64
        
             | Schiendelman wrote:
             | Right, but the vast majority of patient issues today are
             | missing existing diagnoses, not new ones.
        
           | lab14 wrote:
           | I think OP was referring to the case where new illnesses
           | that are not part of the training set will never be
           | diagnosed by AI.
        
             | onion2k wrote:
             | It's only a problem if hospitals replace doctors with AI.
             | If they employ AI _as well_ then outcomes will improve.
             | Using AI to find the ones AI can identify means doctors
              | have more time to focus on the ones that AI can't find.
             | 
             | Of course, that's not what's going to happen. :/
        
               | cbg0 wrote:
               | > Using AI to find the ones AI can identify means doctors
               | have more time to focus on the ones that AI can't find.
               | 
               | That's not how that would work in the real world. In a
               | lot of places a doctor has to put their signature or
               | stamp on a medical document, making them liable for what
               | is on that paper. Just because the AI can do it, that
               | doesn't mean the doctor won't have to double check it,
               | which negates the time saved.
               | 
               | I would wager AI-assisted would be more helpful to reduce
               | things doctors might miss instead of partially or
               | completely replacing them.
        
               | killjoywashere wrote:
               | Interesting. Do you see any versions of the future where
               | use of AI could actually make the physician take _more_
               | time?
        
               | cbg0 wrote:
               | Let's assume you program it so that if it believes with
               | 95% certainty that a patient has a certain condition it
               | will present it to the doctor. While the doctor doesn't
               | agree with it, the whole process between doctor-patient-
               | hospital-insurer might be automated to the point where
               | it's simpler to put the patient through the motions of
               | getting additional checks than the doctor fighting the
               | wrong diagnosis, thus the doctor will have to spend more
               | time to follow up on confirming that this condition is
               | not really present.
               | 
               | I don't have a crystal ball, so this is a made-up
               | scenario.
        
             | Schiendelman wrote:
             | Never is a long time.
             | 
             | Sure, LLMs might not do this anytime soon, but once models
             | understand enough biology, they're going to identify
             | patterns we don't and propose new diagnoses. There's no
             | reason why they wouldn't.
        
               | lab14 wrote:
               | Unfortunately, that's not how LLMs work.
        
           | resource_waste wrote:
           | It has been interesting to see the excuses from doctors, why
           | we need error prone humans instead of higher quality robots.
           | 
           | >Empathy (lol... from doctors?)
           | 
           | >New undetectable cases (lol... AI doesn't have to wait a
           | year for an optional continuing education class. I had
           | doctors a few years ago recommending a dangerous, expensive
           | surgery over a safer, cheaper laser procedure)
           | 
           | >corruptible (lmaooo)
           | 
           | We humans are empathetic to the thought our 'friendly' doctor
           | might be unemployed. However, we shouldn't let that cause
           | negative health outcomes because we were being 'nice'.
        
             | 52-6F-62 wrote:
             | So... we put all of our trust (wait, at that point it might
             | be called faith) into this machine...
             | 
             | If it ever turns on us, begins to malfunction in unforeseen
             | ways, or goes away completely--then what?
             | 
             | Shortsighted, all of it.
        
         | kaonwarb wrote:
         | I doubt it is easier to retrain a large, dispersed group of
         | humans on a new classification of tumors than it is to retrain
         | a model on the same.
        
           | Vegenoid wrote:
           | I think it depends on what you mean by "easier". Dispersing
           | knowledge through people is more intuitive, and tends to
           | happen organically.
        
           | killjoywashere wrote:
           | Not if they're trained to work through the problem each time
           | they encounter it and stay up with their clinical training.
           | The day the new classification drops many have already heard
           | about it.
           | 
           | You also assume that all the models in use will in fact be
           | retrained.
           | 
           | Generally, this position flies in the face of lived
           | experience. AI _is in fact_ stifling adoption of new things
           | across many industries.
        
             | kaonwarb wrote:
             | My position is informed by my own experience; I am not a
             | physician, but have worked closely with a large number of
             | them in a healthcare-oriented career. I've repeatedly noted
             | long-term resistance of many physicians to updating their
             | priors based on robust new evidence.
             | 
             | There are definitely many physicians who do take in the
             | latest developments judiciously. But I find the long tail
             | of default resistance to be very, very long.
        
               | shermantanktop wrote:
               | I was just explaining to a UK colleague about how the
               | American health care system makes getting treatment (and
               | getting it paid for) into a DIY project. And so as a
               | medical shopper, if I'm getting a very standard
               | established treatment I might go for the older
               | experienced doctor, but if it's a new thing I'd opt for
               | someone more recently graduated.
               | 
               | I'm sure the same thing applies worldwide.
        
               | throwbmw wrote:
               | It's not bad actually, considering how many times the
               | new shiny thing has turned out to be quite dangerous a
               | few years later. In a field as high-stakes as healthcare
               | you want the whole spectrum, from early adopters to
               | die-hard skeptics - especially since we know about the
               | reproducibility problems of research, the influence of
               | big pharma and big insurance on healthcare, etc.
        
             | gibspaulding wrote:
             | > You also assume that all the models in use will in fact
             | be retrained.
             | 
             | And that deploying the retrained models won't require a
             | costly and time consuming recertification process. This is
             | medicine after all.
        
             | dsign wrote:
             | Where I live, specialists can't even speak English, so I
             | doubt very much that they are up to date on anything. And I
             | live in a first-world country.
        
               | jajko wrote:
               | At first glance this may look like a bad comment, but
               | there is sound reasoning behind it; I see it with friends
               | who are top-notch doctors and surgeons in the French part
               | of Switzerland.
               | 
               | Many articles and conferences are in English and often
               | don't get translated (well), and one friend who is a
               | urology surgeon specifically mentioned this as an issue
               | in his (former) department in Switzerland's biggest
               | hospital (HUG in Geneva). They simply lag a bit behind
               | the bleeding edge.
               | 
               | Can't comment on other languages/cultures, but
               | French-speaking folks and English often don't pair well
               | together.
        
               | cbg0 wrote:
               | I'm also in a country where English isn't the first
               | language and for the doctors that do wish to stay up to
               | date on what's going on, there are ways for them to do
               | it, and translation technology is pretty top-notch
               | already.
               | 
               | Aside from time constraints and perhaps no incentive to
               | stay up to date, we do have to remember that some of
               | these new discoveries always take time to find their way
               | into becoming SOTA treatments everywhere, due to costs,
               | regulations needing updates, special training or
               | equipment being required, as well as sometimes only being
               | marginally better than existing treatment options.
        
             | afpx wrote:
             | There's no way that doctors in the US continue their
             | training.
             | 
             | I suffer from a chronic illness. I saw 5 different
             | specialists over the past 2 years, and each one gave me
             | different treatment. A couple even relied on information
             | from the 70s and 80s. One even put me in the ER because he
             | changed my treatment after I explicitly told him not to.
             | 
             | Another example: Back in my 20s, I injured my back. I took
             | my MRI results to six different doctors - and I'm serious
             | here, every one gave me a different diagnosis. In the end,
             | I fixed it myself by doing my own research and treatment (2
             | years of physical therapy). One doctor said I had pain
             | because spine fragments were lodged in my spinal cord (not
             | true). Two of the doctors were even pushing me into
             | invasive surgeries, and I'm so glad I told them no.
             | 
             | I don't understand the praise for doctors. If I had to
             | generalize, I'd guess the majority give up learning after
              | achieving wealth and status. It seems like an art and not
              | a science. I will emphatically welcome AI treatment.
        
               | jayd16 wrote:
               | If you don't respect the field in general, why do you
               | think an AI amalgamation of that field to be better?
        
               | afpx wrote:
               | I respect the research and researchers - but medical
               | researchers are far removed from medical practitioners.
        
               | PaulHoule wrote:
               | Continuous improvement is the norm in cancer treatment,
               | not in other areas where diseases are ill-defined and
               | have a huge psychogenic component: back pain is the index
               | case for that, so is TMJ dysfunction. In either case you
               | might go from disabled to 'I have a bad day once a month'
               | with a 20% change in habits and 80% change in attitude.
               | 
               | My dad, who worked in construction, got disabling back
                | pain just in time for the Volcker-Freedman recession [1].
               | His doc wanted to give him surgery which had the risk of
               | being even more disabling, he said he'd go to a
               | chiropractor, his doc said "if you don't do that then
               | don't see me". I remember him taking me along for his
               | chiropractor visits and getting a waterbed (bad idea.) He
               | was on workman's comp at the time but got better around
               | the time the economy got better and work was available
               | again. Not to say he was consciously malingering, but
               | work-associated pain has a lot to do with how you feel
               | about your work.
               | 
               | [1] https://en.wikipedia.org/wiki/Early_1980s_recession
        
           | Workaccount2 wrote:
           | It's not; in my experience doctors are woefully behind the
           | curve on the cutting edge, and even a bit hostile towards it.
        
           | darkerside wrote:
           | Well, the difference is that people eventually die or retire,
           | so they are constantly being replaced.
        
         | fsndz wrote:
         | I think a one-year gap in adoption of new tech is not that bad.
         | Isn't it better to always go for the mature tech first? The
         | real change will come from the fact that, because of AI,
         | compute will be so cheap in the coming years:
         | https://medium.com/thoughts-on-machine-learning/a-future-of-...
        
           | layer8 wrote:
           | This is assuming that new technology will grow the same as in
           | pre-LLM times, and merely be picked up a year late. But use
           | of LLMs is likely to cause new developments to grow and
           | spread slower, because of the reduced visibility. It may take
           | much longer for a new development to gain currency to the
           | extent that it becomes sufficiently visible in the training
           | data. This also slows competition between evolving
           | technologies.
           | 
           | In addition, as the article describes, the LLM services have
           | biases built in to them even among existing technologies. It
           | amplifies existing preferences, leading to less diversity and
           | competition between technologies. Tech leads will have to
           | weigh between the qualities of a technology on its own merits
           | against how well it is supported by an LLM.
        
         | raincole wrote:
         | I know nothing about pathology, but in terms of software, I
         | think slower adoption of new tech is what we need, especially
         | when the "new tech" is just a 5% faster JavaScript framework.
         | 
         | By the way, for content creation, the only platform that really
         | favors new creators is TikTok. Whether it leads to higher
         | content quality is left to one's judgement.
        
           | snek_case wrote:
           | That's not wrong. There is a lot of hype-driven development
           | in the programming world. People are always jumping on the
           | latest web frameworks and such. A little bit more stability
           | is not a bad thing.
           | 
           | That being said, I think that people underestimate how fast
           | LLM technology can evolve. At the moment, lots of training
           | data is needed for LLMs to learn something. This may not
           | always be the case. In 2 to 5 years, it may be possible to
           | train an LLM to be helpful with a new programming language
           | with much less data than is needed today. No reason to assume
           | that the current situation is what things will be like
           | forever. It's not like this technology isn't evolving
           | incredibly fast.
        
             | throwup238 wrote:
             | After watching the entire world's reaction to AI, at this
             | point my conclusion is that hype driven development is
             | human nature, and we just need to come to terms with that
             | (but you will have to drag me kicking and screaming).
        
               | azemetre wrote:
               | Maybe, if you count artificially inflating the hype
               | through massive ad campaigns, marketing campaigns, and
               | shoe-horning AI into every product, then yeah, the world
               | has a reaction to AI. It's mostly been meh; things like
               | Apple Intelligence and Office Copilot have largely fallen
               | flat.
               | 
               | If the hype were real, none of these AI initiatives would
               | be struggling to make money, but they are.
               | 
               | I don't really see it as different from the artificial
               | web3 hype, the only difference being that LLMs are used
               | for extreme happy-path scenarios.
        
               | snek_case wrote:
               | The problem is that Apple Intelligence is currently kinda
               | useless. They rushed it into production in a misguided
               | effort to "stay relevant". It may take a few years but we
               | should eventually get useful personal assistant type AIs.
               | 
               | I would say LLMs are very useful for specific scenarios.
               | They're also getting better. Just takes time to iron out
               | the kinks.
        
               | staunton wrote:
               | > I don't really see it different than the artificial
               | web3 hype
               | 
               | It's also little different than the .com bubble...
               | 
               | I think this teaches us that a thing can be hyped into
               | the stratosphere, suck very much, crash and burn, and
               | then go on to eat the world...
        
           | ragnese wrote:
           | > I know nothing about pathology, but in terms of software, I
           | think slower adoption to new tech is what we need, especially
           | when the "new tech" is just a 5% faster javascript framework.
           | 
           | I hope that's not the definition people are using when
           | discussing adoption of "new tech".
           | 
           | When it comes to the topic of AI and "new tech adoption", I
           | think about something like the Rust programming language.
           | 
           | I apologize if it chafes the people reading this comment that
           | I'm something of a Rust evangelist and I'm working from a
           | point of view that Rust's very existence is a (large) net-
           | positive when it comes to programming and how we think about
           | programming language design.
           | 
           | My fear with AI tools in their current state is that it will
           | slow down innovation in programming languages. Rust gained
           | popularity because it brought things to the table that made
           | writing safe, performant, and correct (thinking about the
           | strong, expressive, static type checking) software much
           | easier than it had been with the old incumbents (in certain
           | domains).
           | 
           | But, if Rust were released today or in the near future, would
           | it take off? If we could, hypothetically, get to a point
           | where an AI tool could spit out C or C++ code and push it
           | through some memory sanitizers, Valgrind, etc. and just iterate
           | with itself until it was very likely to be free of memory
           | safety bugs, why would we need a new language to fix those
           | things? I guess we wouldn't. And it wouldn't really matter if
           | the code that gets generated is totally inscrutable. But, it
           | saddens me to think that we might be nearing the end of
           | human-readable programming language research and design.
        
             | acomjean wrote:
             | It will be harder for new languages and frameworks. AI
             | exacerbates the bootstrapping problem.
             | 
             | An interesting example is Perl, which is essentially
             | static at this point (Perl 6 was renamed and never got
             | traction).
             | 
             | I know from experience running pipelines that those old
             | Perl scripts almost always work, whereas if I come across
             | an old Python (2.x) script I will have to go in and make
             | some adjustments. Maybe a library has changed too...
             | 
             | People like new shiny things though. Maybe the new
             | languages will try to train the AI and release their own
             | models, but that's a huge lift.
        
               | PaulHoule wrote:
               | Might be easier than you think. If DeepSeek can train a
               | model cheaply, so can you. Probably more cheaply as
               | the technology and models get better.
               | 
               | People used to be worried that AI performance was going
               | to degenerate if models are trained on AI slop, but it's
               | been found that synthetic data is the bee's knees for
               | coding, reasoning and such, so it may well be that a new
               | language comes with a large amount of synthetic examples
               | which will not just be good for AI training but also for
               | documentation, testing and all that.
        
             | ANewFormation wrote:
             | All you're talking about there in the end would be another
             | compilation step.
             | 
             | I'm highly bearish on the concept of anything like that
             | ever being possible (and near perfectly reliable) with
             | llms, but if it were then it'd make sense as just another
             | processing phase in compilation.
        
             | PaulHoule wrote:
             | I'm also going to argue that Rust is a less AI-friendly
             | language than, say, Go.
             | 
             | GC languages have many benefits that come from 'you don't
             | have to think about memory allocation'. For instance you
             | can just smack an arbitrary library into a Java program
             | with maven and not think at all about whether the library
             | or the application is responsible for freeing an object.
             | The global problem of memory allocation is handled by a
             | globally scoped garbage collector.
             | 
             | LLMs are great at superficial/local translation processes
               | (e.g. medium-quality translation of Chinese to English
             | doesn't require constraint solving any more than
             | remembering which of the many indexing schemes is the right
             | one for 'how do I look up this row/column/whatever in
             | pandas') But fighting with the borrow checker (getting
             | global invariants right) is entirely outside the realm of
             | LLM competence.
        
           | hartator wrote:
           | a 5% _slower_ javascript framework
        
             | ben_w wrote:
             | Surely 50% slower, compounding each year?
             | 
             | Jokes aside, I find it curious what does and doesn't gain
               | traction in tech. The slowness of IPv6 adoption was
               | already an embarrassment when I learned about it in
               | university... 21
             | years ago, and therefore before the people _currently_
             | learning about it in university had been conceived.
             | 
             | What actually took hold? A growing pantheon of software
             | architecture styles and patterns, and enough layers of
             | abstraction to make jokes about Java class names from 2011
             | (and earlier) seem tame in comparison:
             | https://news.ycombinator.com/item?id=3215736
             | 
             | The way all of us seem to approach code, the certainty of
             | what the best way to write it looks like, the degree to
             | which a lone developer can still build fantastic products
             | and keep up with an entire team... we're less like
             | engineers, more like poets arguing over a preferred form
             | and structure of the words, of which metaphors and simile
               | work best -- and all the while, the audience is asking
               | us to rhyme "orange" or "purple".
        
               | saulpw wrote:
               | The slowness to adopt IPv6 is because it's not a great
               | design.
               | 
               | Going from 32-bits to 128-bits is complete
               | overengineering. We will _never_ need 128-bits of network
               | address space as long as we are confined to this solar
               | system, and the resulting addresses are extremely
               | cumbersome to use. (Can you tell someone your IPv6
               | address over the phone? Can you see it on one monitor and
               | type it into a different computer? Can you remember it
               | for the 10 seconds it takes to walk over to the other
               | terminal?)
               | 
               | 48-bit addresses would have been sufficient, and at worst
               | they could have gone with 64-bit addresses. This is
               | already too cumbersome (9-12 base36 digits), but maybe
               | with area-code-like segmentation it could be made
               | manageable. 128 bits is just not workable.
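               | A quick back-of-envelope on those digit counts (my own
               | Python sketch, not the commenter's): an n-bit address
               | space needs ceil(n / log2 36) base-36 digits.

```python
import math

def base36_digits(bits: int) -> int:
    """Base-36 digits needed to cover a `bits`-bit address space."""
    return math.ceil(bits / math.log2(36))

# 32 -> 7, 48 -> 10, 64 -> 13, 128 -> 25 digits
counts = {bits: base36_digits(bits) for bits in (32, 48, 64, 128)}
```

               | So 48-bit addresses fit in 10 base-36 digits and 64-bit
               | in 13 (close to the 9-12 estimate), while full 128-bit
               | addresses need 25.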
        
           | uludag wrote:
           | Maybe I'm overlooking something but just looking at the past
           | decade or so, a lot of new technologies and practices have
           | been adopted. I assume most people would call these changes
           | _progress_. So with this in mind, if in 10 years we're by
           | and large using the same technologies with AI injected in it,
           | I feel that we would be missing something, as this article
           | points out.
           | 
           | It's kind of sad to think that there may never be new
           | technologies like Rust that break out and gain critical
           | traction. I hope I'm wrong.
        
           | ahartmetz wrote:
           | >5% faster javascript framework
           | 
           | Now you are bullshitting
        
             | frankharv wrote:
             | You are correct he meant 1% faster
        
               | rebolek wrote:
               | 1% faster and 100% more complicated
        
               | ahartmetz wrote:
               | More like 30% slower, 10% easier for trivial stuff and
               | 70% more complicated for nontrivial stuff
        
           | gessha wrote:
           | I guess it makes sense to differentiate technological areas
           | where we want progress at "any possible pace" vs "wait and
           | see pace". I don't know if pathologists or other medical
           | professionals feel the same about their field.
           | 
           | On a related note, are there any techniques for facilitating
           | tech adoption and bringing all users up to speed?
        
           | martinsnow wrote:
           | Web development isn't your thing either, I see ;)
        
           | fennecbutt wrote:
           | Tiktok does not favour new creators, its users do. And only
           | because it's a new generation of consumer for the most part,
           | who want to consume content from their chosen platform. The
           | same thing will happen with Gen Alpha.
        
             | meroes wrote:
             | Netflix and YouTube push new content/favor new creators or
             | directors.
             | 
             | They don't push the classics like music platforms do, and I
             | don't think it's just streamers' tastes.
        
         | resource_waste wrote:
         | >Pathologists as a specialty has been grousing about this for
         | several years, at least since 2021 when the College of American
         | Pathologists established the AI Committee.
         | 
         | This sounds like Moral Coating for what is otherwise protection
         | of the Status Quo.
         | 
         | High paid doctors do not want to be replaced by AI. They will
         | use every excuse to keep their high paying job.
        
         | diggan wrote:
         | > The entire music community has been complaining about how old
         | music gets more recommendations on streaming platforms,
         | necessarily making it harder for new music to break out.
         | 
         | Compared to what though? Compared to LimeWire/Kazaa back in
         | the day, or compared to buying records in a store?
         | 
         | Personally, I find it easier than ever to find brand new
         | music, mostly because Spotify still surfaces new things with
         | ease for me (and always has, since I started using it in
         | 2008), and platforms like Bandcamp make it trivial to find
         | new artists that basically started uploading music yesterday.
        
           | gosub100 wrote:
           | Or compared to the days of radio, having labels decide what's
           | on the mainstream and the indie college stations doing the
           | unpaid work (and giving listeners the gift) of discovering
           | "lost" hits.
        
           | Blackthorn wrote:
           | Compared to Myspace. The difference for anyone who lived
           | through it is night and day.
        
           | serviceberry wrote:
           | > Compared to what though? Compared to LimeWire/Kazaa back
           | in the day, or compared to buying records in a store?
           | 
           | Compared to curation by other humans. Be it music labels,
           | magazines, radio DJs, or a person sharing their playlist or
           | giving you a mixtape.
           | 
           | In this model, tastes never overlap perfectly, so you're
           | exposed to unfamiliar music fairly regularly, often in some
           | emotional context that makes you more likely to accept
           | something new.
           | 
           | Algorithms don't really do that. They could, but no one is
           | designing them that way. If I listen predominantly to female
           | vocalists on Spotify for a week, I'm only getting female
           | vocalists from now on.
        
             | esafak wrote:
             | I don't get that. Its recommender has been great for me.
             | And there are lots of playlists if I want to try
             | something completely different.
        
         | golergka wrote:
         | > The entire music community has been complaining about how old
         | music gets more recommendations on streaming platforms,
         | necessarily making it harder for new music to break out.
         | 
         | I can understand other issues, but this has nothing to do with
         | that. Models don't have to be re-trained to recommend new
         | music. That's not how recommendation systems work.
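         | As a toy illustration of that point (hypothetical play
         | counts, a minimal item-based collaborative-filtering sketch):
         | item similarities can be recomputed from live interactions,
         | so a track released after any training run still gets scored
         | and recommended, without retraining anything.

```python
import math

# Hypothetical play counts: rows are users, columns are tracks.
# The last track was released *after* any model training run.
plays = [
    [5, 3, 0, 2],
    [4, 0, 0, 3],
    [1, 2, 5, 0],
]

def column(matrix, j):
    return [row[j] for row in matrix]

def cosine(a, b):
    # Cosine similarity between two track interaction vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

n_tracks = len(plays[0])
new = n_tracks - 1
# Similarity of every track to the new one, from live data alone.
sim_to_new = [cosine(column(plays, j), column(plays, new))
              for j in range(n_tracks)]

# Score the new track for the third user by weighting their plays
# by similarity to the new track - no model retraining involved.
score = sum(s * p for s, p in zip(sim_to_new, plays[2]))
```

         | The numbers are made up, but the mechanism is real: classic
         | neighbourhood-style recommenders score new items from fresh
         | interaction data, independent of any training cutoff.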
        
         | hindsightbias wrote:
         | > new music
         | 
         | I keep thinking I'm going crazy until Rick Beato explains that
         | yes, I am just an RNN Meat Popsicle and the world is
         | interpolated:
         | 
         | https://www.youtube.com/watch?v=j_9Larw-hJM
        
         | idunnoman1222 wrote:
         | This is the fault of the regulators. There's no reason new
         | discoveries can't be put in a queue for training a new AI;
         | when there are enough to make it worth the run, you do the
         | run, then give the doctors the old model and the new model,
         | and they run both and compare the results.
        
         | meroes wrote:
         | > The entire music community has been complaining about how old
         | music gets more recommendations on streaming platforms,
         | necessarily making it harder for new music to break out.
         | 
         | Why does music continually entrench the older stuff (will we
         | ever stop playing classic rock bands) whereas video streaming
         | platforms like Netflix and YouTube try to hide/get rid of the
         | old stuff?
        
           | adovenmuehle wrote:
           | I wonder if shows like The Office, Parks and Rec, Seinfeld,
           | etc end up becoming the "classic rock" of streaming.
        
           | AlienRobot wrote:
           | The main issue with AI, and ironically the reason why ChatGPT
           | is the best one, is whom it works for.
           | 
           | AI doesn't work for the user. It couldn't care less if the
           | user is happy or not. AI is designed first and foremost to
           | make more money for the company. Its metrics are increased
           | engagement and time on site, more sales, sales with better
           | margins. Consequently, the user often has no choice or
           | control over what the AI recommends for them. The AI is
           | recommending what makes more sense for the company, so the
           | user input is unnecessary.
           | 
           | Think of AI not as your assistant, but as a salesman.
           | 
           | One interesting consequence of this situation I found was
           | that Youtube published a video "explaining" to creators why
           | their videos don't have reach in the algorithm, where they
           | essentially said a bunch of nothing. They throw some data at
           | the AI, and the AI figures it out. Most importantly, they
           | disclosed that one of the key metrics driving their algorithm
           | is "happiness" or "satisfaction" partially gathered through
           | surveys, which (although they didn't explicitly say this)
           | isn't a metric that they provide creators with, thus it's
           | possible for Youtube to optimize for this metric, but not for
           | creators to optimize for it. That's because the AI works for
           | Youtube. It doesn't work for creators, just as it doesn't
           | work for users.
           | 
           | People are complex creatures, so any attempt at guessing what
           | someone wants at a specific time without any input from them
           | seems just flawed at a conceptual level. If Youtube wanted to
           | help users, they would just fix their search, or incorporate
           | AI in the search box. That's a place where LLMs could work, I
           | think.
           | 
           | When you look at things this way, the reason why
           | Netflix/Youtube get rid of old stuff has nothing to do with
           | users, but with some business strategy that they have that
           | differs from the music industry.
        
       | delichon wrote:
       | Working in Zed I'm full of joy when I see how well Claude can
       | help me code. But when I ask Claude about how to use Zed it's
       | worse than useless, because its training data is old compared to
       | Zed, and it freely hallucinates answers. So for that I switch
       | over to Perplexity calling OpenAI and get far better answers. I
       | don't know if it's more recent training or RAG, but OpenAI knows
       | about recent Zed github issues where Claude doesn't.
       | 
       | As long as the AI is pulling in the most recent changes it
       | wouldn't seem to be stifling.
        
         | soared wrote:
         | I tried using ChatGPT 4o to write a simple website that used
         | the ChatGPT API. It always generated code for their
         | deprecated API. I'd paste the error about using old calls,
         | and it would recognize its error and... generate old calls
         | again.
         | 
         | It could never use its own API.
        
       | chrisco255 wrote:
       | This makes me fear less for web development jobs being lost to
       | AI, to be honest. Look, we can create new frameworks faster than
       | they can train new models. If we all agree to churn as much as
       | possible the AIs will never be able to keep up.
        
       | anarticle wrote:
       | Sadly, as a person who used to write AVX in C for real-time
       | imaging systems: don't care, shipped.
       | 
       | I love dingling around with Cursor/Claude/qwen to get a 300 line
       | prototype going in about 3-5 minutes with a framework I don't
       | know. It's an amazing time to be small, I would hate to be
       | working at a megacorp where you have to wait two months to get
       | approval to use only GitHub copilot (terrible), in a time of so
       | many interesting tools and more powerful models every month.
       | 
       | For new people, you still have to put the work in and learn if
       | you want to transcend. That's always been there in this industry
       | and I say that as a 20y vet, C, perl, java, rails, python, R, all
       | the bash bits, every part matters just keep at it.
       | 
       | I feel like a lot of this is the JS frontend community running
       | headlong into their first sea change in the industry.
        
       | mtkd wrote:
       | Sonnet + Tailwind is something of a force multiplier though --
       | backend engineers now have a fast/reliable way of making frontend
       | changes that are understandable and without relying on someone
       | else -- you can even give 4o a whiteboard drawing of a layout and
       | get the tailwind back in seconds
       | 
       | On the wider points, I do think it is reducing time coders are
       | thinking about strategic situation as they're too busy advancing
       | smaller tactical areas which AI is great at assisting -- and
       | agree there is a recency issue looming, once these models have
       | heavy weightings baked in, how does new knowledge get to the
       | front quickly -- where is that new knowledge now people don't use
       | Stackoverflow?
       | 
       | Maybe Grok becomes important purely because it has access to
       | developers and researchers talking in realtime even if they are
       | not posting code there
       | 
       | I worry the speed at which this is happening will result in
       | younger developers not spending weeks or months thinking about
       | something -- so they get some kind of code ADHD and never
       | develop the skills to take on the big-picture stuff later,
       | which AI could still be quite a way off from taking on
        
         | the__alchemist wrote:
         | > backend engineers now have a fast/reliable way of making
         | frontend changes that are understandable and without relying on
         | someone else
         | 
         |  _backend engineers_ in this context could learn JS.
        
       | moyix wrote:
       | One thing that is interesting is that this was anticipated by the
       | OpenAI Codex paper (which led to GitHub Copilot) all the way back
       | in 2021:
       | 
       | > Users might be more inclined to accept the Codex answer under
       | the assumption that the package it suggests is the one with which
       | Codex will be more helpful. As a result, certain players might
       | become more entrenched in the package market and Codex might not
       | be aware of new packages developed after the training data was
       | originally gathered. Further, for already existing packages, the
       | model may make suggestions for deprecated methods. This could
       | increase open-source developers' incentive to maintain backward
       | compatibility, which could pose challenges given that open-source
       | projects are often under-resourced (Eghbal, 2020; Trinkenreich et
       | al., 2021).
       | 
       | https://arxiv.org/pdf/2107.03374 (Appendix H.4)
        
         | MattGaiser wrote:
         | ChatGPT and Gemini default to create-react-app, which has been
         | considered poor practice for 2 years at least.
        
           | normie3000 wrote:
           | What's considered better practice? NextJS?!
        
             | tylerjaywood wrote:
             | vite
        
             | ZeWaka wrote:
             | Sure, or something like create-vite. Dealers choice.
        
             | TheRealPomax wrote:
             | Literally yes? https://react.dev/learn/creating-a-react-app
             | lists several options because different people will click
             | with different solutions. Or find one of the many other
             | ones yourself of course, they're not hard to find, but
             | cutting your teeth on an official recommendation before
             | moving on to greener pastures is always a good idea.
             | 
             | The first one is always to learn from and then throw away.
        
           | ZeWaka wrote:
           | It's also officially dead now (finally, lol).
        
       | hiAndrewQuinn wrote:
       | >Consider a developer working with a cutting-edge JavaScript
       | framework released just months ago. When they turn to AI coding
       | assistants for help, they find these tools unable to provide
       | meaningful guidance because their training data predates the
       | framework's release. [... This] incentivises them to use
       | something [older].
       | 
       | That sounds great to me, actually. A world where e.g. Django and
       | React are considered as obvious choices for backend and frontend
       | as git is for version control sounds like a world where high
       | quality web apps become much cheaper to build.
        
         | matsemann wrote:
         | What if it happened just before React, and you therefore got
         | stuck with angular? Should we now be stuck with React forever
         | just because it's okay-ish, never allowing future better
         | framework to emerge?
        
           | hiAndrewQuinn wrote:
           | >What if it happened just before React, and you therefore got
           | stuck with angular?
           | 
           | Still a good thing. :) The massive bump in developer market
           | liquidity is far more valuable in my eyes than any inherent
           | DevEx advantages. You'd still have much cheaper high quality
           | web apps, although, if React truly has a technical advantage
           | over Angular (doubtful), maybe not _as_ much cheaper, but
           | still much cheaper than pre-LLM.
           | 
           | If you truly want to figure out where I think the equation
           | sign flips, it's probably like, pre-Smalltalk somewhere.
        
           | tobyhinloopen wrote:
           | Or we'd have ended up writing jQuery forever!
           | 
           |     $(function() {
           |       // yay
           |     });
        
         | klysm wrote:
         | I'm all for boring technologies but can we please at least use
         | compiled languages with types
        
           | lasagnagram wrote:
           | Nobody's stopping you?
        
         | munificent wrote:
         | _> A world where e.g. Django and React are considered as
         | obvious choices for backend and frontend as git is for version
         | control sounds like a world where high quality web apps become
         | much cheaper to build._
         | 
         | Imagine you saying this twenty years ago. Would you still want
         | to be writing your back-end in JavaBeans, your front end in
         | VisualBASIC, and storing your data in Subversion?
        
           | AlexandrB wrote:
           | VisualBASIC made much nicer (and more responsive) UIs than
           | Electron, so this isn't the slam dunk you think it is.
        
             | esafak wrote:
             | We dunk on Electron too, don't worry.
        
         | lasagnagram wrote:
         | Has that developer considered reading the documentation? Or
         | maybe not using this week's newest JS framework with three
         | GitHub stars and a flashy marketing page? God, what a
         | depressing future. "Tech experts are useless unless the
         | plagiarizing mechanical Turk has all the answers for them." The
         | only jobs AI will eliminate are the people who pretend to know
         | anything about software while actually being entirely useless.
         | 
         | Learn. Your. Fucking. Craft.
        
       | jleask wrote:
       | The underlying tech choice only matters at the moment because as
       | software developers we are used to that choice being important.
       | We see it as important because _we_ currently are the ones that
       | have to use it.
       | 
       | As more and more software is generated and the prompt, rather
       | than code, becomes how we define software (i.e. we shift up an
       | abstraction level), how it is implemented will become less and
       | less interesting to people. In the same way, product owners now
       | do not care about technology; they just want a working solution
       | that meets their requirements. Similarly, I don't care how the
       | assembly language produced by a compiler looks most of the
       | time.
        
       | booleandilemma wrote:
       | Seems like a short-term problem. We're going to get to the point
       | (maybe we're already there?) where we'll be able to point an AI
       | at a codebase and say "refactor that codebase to use the latest
       | language features" and it'll be done instantly. Sure, there might
       | be a lag of a few months or a year, but who cares?
        
       | at_ wrote:
       | Anecdotally, working on an old Vue 2 app I found Claude would
       | almost always return "refactors" as React + Tailwind the first
       | time, and need nudging back into using Vue 2.
        
         | AznHisoka wrote:
         | I have a similar experience when I tell ChatGPT I need Ruby
         | 2.x code. It always gives me a Ruby 3 version even after I
         | tell it I need code that works with version 2. I have to
         | scream and curse at it before it fixes it so it works.
        
       | pmuk wrote:
       | I have noticed this. I think it also applies to the popularity of
       | the projects in general and the number of training examples it
       | has seen.
       | 
       | I was testing Github copilot's new "Agent" feature last weekend
       | and rapidly built a working app with Vue.js + Vite +
       | InstantSearch + Typesense + Tailwind CSS + DaisyUI
       | 
       | Today I tried to build another app with Rust and Dioxus and it
       | could barely get the dev environment to load, kept getting stuck
       | on circular errors.
        
       | evanjrowley wrote:
       | Neovim core maintainer TJ DeVries expressed similar concerns
       | in a video earlier this year:
       | https://youtu.be/pmtuMJDjh5A?si=PfpIDcnjuLI1BB0L
        
       | benrutter wrote:
       | I think anecdotally this is true. I've definitely seen worse
       | but older technologies be chosen on the basis of LLMs knowing
       | more about them.
       | 
       | That said, I also think it's a _bad choice_, and here's some
       | good news on that front- you can make good choices which will put
       | you and your project/company ahead of many projects/companies
       | making bad choices!
       | 
       | I don't think the issue is that specific to LLMs- people have
       | been choosing React and similar technologies "because it's easy
       | to find developers" for ages.
       | 
       | It's definitely a shame to see people make poor design decisions
       | for new reasons, but I think poor design decisions for dumb
       | reasons are gonna outlive LLMs by some way.
        
       | trescenzi wrote:
       | Generative AI is fundamentally a tool that enables
       | acceleration. Everything mentioned in this article is already
       | true without Gen AI. Docs for new versions aren't as easy to
       | find until they aren't so new. This
       | is even true for things in the zeitgeist. Anyone around for the
       | Python 2 to 3 or React class to hooks transitions knows how
       | annoying that can be.
       | 
       | Yes new programmers will land on Python and React for most
       | things. But they already do. And Gen AI will do what it does best
       | and accelerate. It remains to be seen what'll come of that trend
       | acceleration.
        
       | dataviz1000 wrote:
       | I'm on the fence with this. I've been using Copilot with vscode
       | constantly and it has greatly increased my productivity. Most
       | important it helps me maintain momentum without getting stuck.
       | Ten years ago I would face a problem with no solution, write a
       | detailed question on Stack Exchange, and most likely solve it in
       | a day or two with a lot of tinkering. Today I ask Claude. If it
       | doesn't give me a good answer, I can get the information I need
       | to solve the problem.
       | 
       | I've been thinking a lot about T.S. Eliot lately. He wrote an
       | essay, "Tradition and the Individual Talent," which I think is
       | pertinent to this issue. [0] (I should reread it.)
       | 
       | [0] https://www.poetryfoundation.org/articles/69400/tradition-
       | an...
        
       | lherron wrote:
       | I don't know how you solve the "training data and tooling prompts
       | bias LLM responses towards old frameworks" part of this, but once
       | a new (post-cutoff) framework has been surfaced, LLMs seem quite
       | capable of adapting using in-context learning.
       | 
       | New framework developers need to make sure their documentation is
       | adequate for a model to use it when the docs are injected into
       | the context.
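       | That in-context approach can be sketched in a few lines (a
       | hypothetical helper, not any particular tool's API): stuff the
       | post-cutoff docs into the prompt and constrain the model to
       | them.

```python
def build_prompt(question: str, doc_snippets: list[str]) -> str:
    """Ground an LLM in post-cutoff framework docs via in-context learning."""
    docs = "\n\n".join(f"[doc {i}]\n{snip}"
                       for i, snip in enumerate(doc_snippets, 1))
    return (
        "You are assisting with a framework released after your training cutoff.\n"
        "Answer only from the documentation below; say so if it is not covered.\n\n"
        f"{docs}\n\nQuestion: {question}"
    )

# The framework snippet below is invented purely for illustration.
prompt = build_prompt(
    "How do I register a route?",
    ["register(path, handler) adds a route to the app's router."],
)
```

       | The point is only that fresh documentation in the context
       | window substitutes for stale training data; the quality of the
       | answer then depends on how good the injected docs are.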
        
       | memhole wrote:
       | I've wondered this myself. There was a post about gumroad a few
       | months ago where the CEO explained the decision to migrate to
       | typescript and react. The decision was in part because of how
       | well AI generated those, iirc.
        
       | lackoftactics wrote:
       | > OpenAI's latest models have cutoffs of late 2023.
       | 
       | The first paragraph is factually incorrect; the cutoff is June
       | 2024 for 4o.
       | 
       | Awww, no more new JavaScript frameworks and waiting only for
       | established technologies to cut through the noise. I don't see
       | that as a bad thing. Technologies need to mature, and maintaining
       | API backward compatibility is another advantage.
        
         | mtkd wrote:
         | The problem is acute with APIs that move fast and deprecate
         | (Shopify and some of the Google ones)
        
         | smashed wrote:
         | It does not really matter as even though the models get
         | updated, the new data was produced with the help of the older
         | models, it is feeding on itself.
         | 
         | Just imagine how hard it would be to push a new programming
         | language. No AI models would be able to generate code in that
         | new language, or they would be extremely limited. This would
         | make adoption much more difficult in a world where all
         | developers use AI tooling extensively.
         | 
         | I believe this trend could create new opportunities also: as
         | everyone uses AI tools to generate statistically average
         | quality code, only those not using AI tools will be able to
         | create true innovation.
        
           | Workaccount2 wrote:
           | In some sense I am hopeful that AI will be able to just write
           | everything directly in binary. Everything written ideally,
           | with no abstraction, fluff or bumpers for human brains.
           | Computers don't talk in any high level programming language,
           | they talk in binary. If anything we should probably be
           | focusing LLMs on getting good at that.
           | 
           | I can only imagine that the amount of energy wasted on CPU
           | cycles from layers of bloated programming languages makes
           | stuff like bitcoin mining look like a rounding error.
        
             | atypeoferror wrote:
             | Not sure that's always a good thing - see the occasionally
             | erratic behavior of Tesla's autopilot. It directly speaks
             | the language of the systems it connects with, and also
             | occasionally steers into dividers, killing people - and
             | nobody knows why. We need to be able to verify correctness
             | of what the models generate.
        
         | chrismarlow9 wrote:
         | I can't wait to see how these AI models maintain backward
         | compatibility
        
         | OuterVale wrote:
         | Author here. May I request a source for that?
         | 
         | Platform docs state:
         | 
         | > The knowledge cutoff for GPT-4o models is October, 2023.
         | 
         | https://platform.openai.com/docs/models#gpt-4o
        
           | dankebitte wrote:
           | > Updates to GPT-4o in ChatGPT (January 29, 2025)
           | 
           | > By extending its training data cutoff from November 2023 to
           | June 2024 [...]
           | 
           | https://help.openai.com/en/articles/9624314-model-release-
           | no...
        
             | OuterVale wrote:
              | Thank you. I'll make a correction next chance I get.
             | 
             | I do wonder why this information is lacking from the
             | platform docs though. They specifically mention a model
             | that is the "GPT-4o used in ChatGPT".
        
               | rs186 wrote:
                | It seems like a bug to me. If the content in the
                | doc does not match the actual behavior, either the
                | doc is outdated, or the software has a bug. Which is
                | a bug either way.
        
         | Secretmapper wrote:
         | > Awww, no more new JavaScript frameworks and waiting only for
         | established technologies to cut through the noise. I don't see
         | that as a bad thing. Technologies need to mature, and
         | maintaining API backward compatibility is another advantage.
         | 
         | I think this kind of discussion is immature and downplays the
         | point of the article.
         | 
          | A good example of this that I just encountered: Rust. Just
          | asked Claude/ChatGPT for Rust stuff recently, and it still
          | gives a lot of old/deprecated methods. This has been the
          | case for Godot 3 vs 4 as well.
        
           | wrs wrote:
           | Same here. In my TypeScript code, Cursor/Claude seem quite
           | fluent. In my Rust code, I often turn Cursor Tab off because
           | it's just suggesting nonsense.
        
         | esafak wrote:
         | What if you are using boring technology, but surprise, it or
         | some of its libraries got updated? React is on version 19. Show
         | some imagination.
        
       | matsemann wrote:
       | I actually asked this a while back, but got little response:
       | https://news.ycombinator.com/item?id=40263033
       | 
       | > Ask HN: Will LLMs hurt adoption of new frameworks and
       | technology?
       | 
       | > If I ask some LLM/GPT a react question I get good responses. If
       | I ask it about a framework released after the training data was
       | obtained, it will either not know or hallucinate. Or if it's a
       | lesser known framework the quality will be worse than for a known
       | framework. Same with other things like hardware manuals not being
       | trained on yet etc.
       | 
       | > As more and more devs rely on AI tools in their work flows,
       | will emerging tech have a bigger hurdle than before to be
       | adopted? Will we regress to the mean?
        
         | spamizbad wrote:
         | It seems self-evident it will, and it's largely self-
         | reinforcing.
         | 
         | Less documentation/examples of new tech -> New model doesn't
         | have enough info on new tech to be useful -> Less uptake of new
         | technology -> Less documentation/examples to build a corpus....
         | 
         | I do wonder if this problem could get solved by basically
         | providing documentation explicitly written for LLMs to consume
         | and produce more detailed "synthetic" documentation/examples
         | from. No idea if that's possible or even wise, but probably a
         | problem space worth exploring. Or if these LLMs develop some
         | sort of standardized way to rapidly apply new bodies of work
         | that avoids costly retraining - like kernel modules, but for
         | LLMs.
        
           | cbg0 wrote:
           | Since the current chatbots have the ability to tap into
           | Google Search, it's not unlikely they could gather their own
           | up to date documentation on-the-fly. This would create a slew
           | of new attack vectors where malicious actors will try to add
           | backdoors into the documentation, which the LLM would
           | reproduce.
           | 
            | A seasoned software engineer will easily pick up on it,
            | but the large number of folks who are just copy-pasting
            | chatbot output to make their own apps will certainly
            | miss it.
        
           | logifail wrote:
           | > Less documentation/examples of new tech -> New model
           | doesn't have enough info on new tech to be useful -> Less
           | uptake of new technology -> Less documentation/examples to
           | build a corpus....
           | 
           | Q: Could it be that those who aren't relying on ChatGPT (or
           | similar) might have a significant competitive advantage?
        
             | kdmtctl wrote:
             | Just a head start. The LLM retraining cycle could probably
             | become shorter and shorter over time.
        
         | efavdb wrote:
          | Supporting anecdata: I was interested to see that ChatGPT
          | doesn't know how to use one of my (small, not too popular)
          | open source Python packages -- despite there being blog
          | posts and documentation on it, all from more than five
          | years back.
        
         | curious_cat_163 wrote:
         | I like what FastHTML folks did by preparing what amounts to
         | instructions for LMs. From their Github [1]:
         | 
         | > This example is in a format based on recommendations from
         | Anthropic for use with Claude Projects. This works so well that
         | we've actually found that Claude can provide even better
         | information than our own documentation!
         | 
         | This is the file: https://docs.fastht.ml/llms-ctx.txt
         | 
         | [1] https://github.com/AnswerDotAI/fasthtml
        
           | bckmn wrote:
            | This example is based on this proposed standard (akin to
            | robots.txt or security.txt): [The /llms.txt file - llms-
            | txt](https://llmstxt.org/)
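            | For reference, the proposed /llms.txt is itself plain
            | Markdown: an H1 project name, a blockquote summary, then
            | H2 sections of links. A minimal hypothetical sketch (the
            | project name and URLs are made up):

```markdown
# ExampleLib

> ExampleLib is a small routing library; this file gives LLMs a
> concise map of the docs.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): install and
  declare a first route
- [API reference](https://example.com/docs/api.md): all public
  functions and options

## Optional

- [Changelog](https://example.com/changelog.md)
```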
        
           | authorfly wrote:
           | I do not know how FastHTML works with AI, but for the recent
           | Svelte 5, which has a similar llms.txt file, it's clear to me
           | that actual usage patterns are required over explanatory
           | llms.txt content with instructions. In Svelte 5 usage, it's
           | consistent at certain things which the docs spell out for
           | version 5 and would not be in the training data (runes) but
           | not in their placement, reactivity, or usage changes implied
           | by the system (e.g. changes of information flow outside of
           | props/states)
           | 
           | It seems similar to a cheat sheet which has formulas, vs a
           | cheat sheet with worked through sample problems that align
           | perfectly with the test.
           | 
           | The latter exists for historic modules - the former is the
           | best you can do for recent libraries/versions.
           | 
           | I am not sure where React stands as I know they have changed
           | their reactivity model and introduced patterns which reduce
           | boilerplate code. Can anyone comment on the latest react
           | version used with AI?
        
       | photochemsyn wrote:
       | The central issue is the high cost of training the models,
       | it seems:
       | 
       | > "Once it has finally released, it usually remains stagnant in
       | terms of having its knowledge updated. This creates an AI
       | knowledge gap. A period between the present and AI's training
       | cutoff... The cutoff means that models are strictly limited in
       | knowledge up to a certain point. For instance, Anthropic's latest
       | models have a cutoff of April 2024, and OpenAI's latest models
       | have cutoffs of late 2023."
       | 
       | Hasn't DeepSeek's novel training methodology changed all that? If
       | the energy and financial cost for training a model really has
       | drastically dropped, then frequent retraining including new data
       | should become the norm.
        
         | lrae wrote:
         | > Hasn't DeepSeek's novel training methodology changed all
         | that? If the energy and financial cost for training a model
         | really has drastically dropped, then frequent retraining
         | including new data should become the norm.
         | 
         | Even if training gets way cheaper or even if it stays as
         | expensive but more money gets thrown at it, you'll still run
         | into the issue of having no/less data to train on?
        
           | photochemsyn wrote:
           | True. One effective test for AGI might be the ability to
           | first create a new language, then also write performant code
           | in that language.
        
       | armchairhacker wrote:
       | AI may be exaggerating this issue, but it's always existed.
       | 
       | New tech has an inherent disadvantage vs legacy tech, because
       | there's more built-up knowledge. If you choose React, you have
       | better online resources (official docs, tutorials, answers to
       | common pitfalls), more trust (it won't ship bugs or be
       | abandoned), great third-party helper libraries, built-in IDE
       | integration, and a large pool of employees with experience. If
       | you choose some niche frontend framework, you have none of those.
       | 
       | Also, popular frameworks usually have better code, because they
       | have years of bug-fixes from being tested on many production
       | servers, and the API has been tailored from real-world
       | experience.
       | 
       | In fact, I think the impact of AI generating better outputs
       | for React is _far less_ than that of the above. AI still
       | works on novel programming languages and libraries, just at
       | worse quality, whereas IDE integrations, helper libraries,
       | online resources, etc. are useless (unless the novel
       | language/library bridges to the popular one). And many people
       | today still write code with zero AI, but nobody writes code
       | without the internet.
        
         | kerblang wrote:
         | I've been looking all through here for someone to finally make
         | this most obvious point.
         | 
         | Even for those of us who use mostly stack overflow/google, it's
         | much cheaper to wait on someone else to run into your problem
         | and document the solution than to be first into the fire. We've
         | relied on this strategy for a couple of decades now.
         | 
         | I don't think the OP has demonstrated that adoption rates for
         | new tech have _changed_ in any way since AI.
         | 
         | > Also, popular frameworks usually have better code, because
         | they have years of bug-fixes from being tested on many
         | production servers, and the API has been tailored from real-
         | world experience.
         | 
         | Overall I am very resistant to the idea that popular==good. I'd
         | say popular==more popular. Also I think there's often a point
         | where feeping creaturism results in tools that are
         | overcomplicated, prone to security bugs and no longer easy to
         | use.
        
       | thecleaner wrote:
       | Shove the docs in as context. Gemini has a 2M-token context
       | window.
        
       | highfrequency wrote:
       | I have definitely noticed that ChatGPT is atrocious at writing
       | Polars code (which was written recently and has a changing API)
       | while being good at Pandas. I figure this will mostly resolve
       | when the standard reasoning models incorporate web search through
       | API documentation + trial and error code compilation into their
       | chain of thought.
        
       | d_watt wrote:
       | It's always been a thing with modes of encapsulating knowledge.
       | The printing press caused the freezing of language, sometimes in
       | a weird place*
       | 
       | Where great documentation was make or break for an open
       | source project for the last 10 years, I think creating new
       | projects with AI in mind will be required in the future.
       | Maybe that means creating a large number of examples, maybe
       | it means providing fine-tunes, maybe it means publishing an
       | MCP server.
       | 
       | Maybe sad because it's another barrier to overcome, but the fact
       | that AI coding is so powerful so quickly probably means it's
       | worth the tradeoff, at least for now.
       | 
       | *https://www.dictionary.com/e/printing-press-frozen-spelling/
        
       | spenvo wrote:
       | Like several other commenters in this thread, I also wrote[0]
       | something recently on a related topic: Google's AI Overviews and
       | ChatGPT harm the discovery of long tail information - from a
       | product builder's perspective. Basically, users are having a
       | tougher time finding accurate info about your product ( _even if
       | the correct answer to their query is in Google 's own search
       | results_). And I also found the basic tier of ChatGPT
       | hallucinated my app's purpose in a way that was borderline
       | slanderous. AI can make it tougher (at scale) for creators trying
       | to break through.
       | 
       | [0] - https://keydiscussions.com/2025/02/05/when-google-ai-
       | overvie...
        
         | diggan wrote:
         | > And I also found the basic tier of ChatGPT hallucinated my
         | app's purpose in a way that was borderline slanderous.
         | 
         | I'm curious about this, what exactly did ChatGPT write and how
         | was it borderline slanderous? Sounds like a big danger.
        
           | spenvo wrote:
           | So ChatGPT seemingly guessed its purpose just from its name.
           | Its name is CurrentKey Stats, and it inaccurately described
           | it as an app that kept stats on the current keyboard keys you
           | were pressing, so essentially a key logger, which is again
           | completely wrong. I was actually somewhat hesitant to make
           | this comment out of the fear that the next AI models will
           | train on it and reinforce the false idea that that's what my
           | app is. Sad times
        
             | ben_w wrote:
              | FWIW, one thing they're pretty competent at is
              | sentiment analysis, so if they read your comment, even
              | in isolation, what they'll probably learn is that
              | reading that kind of thing into a name is really bad
              | in general.
             | 
             | It's not like the bad old days where sentiment analysis was
             | a bag of words model, add up all the "positive" words and
             | subtract from that total the number of "negative" words --
             | back then, they would mis-identify e.g. "Product was not as
             | described, it did not come with batteries, and the surface
             | wasn't even painted" as "this review favours this product"
             | because they couldn't handle "not" or "wasn't".
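              | The failure mode is easy to reproduce. A minimal
              | sketch of such a bag-of-words scorer (the word lists
              | are made up for illustration, not from any real
              | lexicon):

```python
# Naive bag-of-words sentiment: count positive words, subtract
# negative words, ignore negation entirely.
POSITIVE = {"described", "painted", "good", "great", "works"}
NEGATIVE = {"bad", "broken", "awful", "defective"}

def bow_sentiment(text: str) -> int:
    words = text.lower().replace(",", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

review = ("Product was not as described, it did not come with "
          "batteries, and the surface wasn't even painted")
# "described" and "painted" each score +1; "not" and "wasn't" are
# invisible to the model, so this clearly negative review scores +2.
print(bow_sentiment(review))  # 2
```

              | A negation-aware model has to treat "not as
              | described" as negative, which is exactly what a bag
              | of words cannot do.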
        
             | jonas21 wrote:
             | You may be happy to learn that I just asked ChatGPT (the
             | free tier that you can access without signing in) "What
             | does the CurrentKey Stats app do?" and got this back:
             | 
             | ----
             | 
             |  _CurrentKey Stats is a macOS application designed to
             | enhance your productivity by offering detailed insights
             | into your app usage and providing advanced management of
             | your Mac 's virtual desktops, known as "Spaces."_
             | 
             |  _Key Features:_
             | 
             |  _Time Tracking: Unlike macOS 's Screen Time, which tracks
             | total app usage, CurrentKey Stats monitors the time each
             | app spends in the foreground, offering a more accurate
             | representation of active usage._
             | 
             |  _Room Management: The app allows you to assign custom
             | names and unique menu bar icons to each Space, facilitating
             | easy identification and navigation. You can switch between
             | these "Rooms" using the menu bar icon or customizable
             | hotkeys._
             | 
             |  _Automation Support: For advanced users, CurrentKey Stats
             | supports AppleScript, enabling you to automate actions when
             | entering or leaving specific Rooms. This feature can
             | trigger scripts to perform tasks like launching
             | applications, adjusting system settings, or controlling
             | smart home devices._
             | 
             |  _Privacy-Focused: The application prioritizes user privacy
             | by keeping all data local on your device, ensuring that
             | your usage statistics are not shared externally._
             | 
             |  _By combining detailed app usage statistics with enhanced
             | Space management and automation capabilities, CurrentKey
             | Stats aims to help you use your Mac more efficiently and
             | deliberately._
             | 
             | ----
             | 
             | But more generally, I think people often look at LLMs and
             | assume that the current flaws will be around forever and
             | this will be horrible. But all technologies have big,
             | obvious flaws when they're first introduced and these get
             | fixed because there's a strong incentive to have a less bad
             | product.
        
               | spenvo wrote:
                | That's good. When I wrote the article, I was able
                | to get a similar response when I used ChatGPT
                | "search", but not from the basic-tier default model
                | with the prompt "i own a Mac, would currentkey stats
                | be good for me?". Were you using the default ChatGPT
                | model here, or ChatGPT "search", or a better model?
        
               | jonas21 wrote:
               | I went to https://chatgpt.com, typed "What does the
               | CurrentKey Stats app do?" into the box, and pressed
               | Enter.
        
               | spenvo wrote:
               | Cool - thanks for giving it a shot, and I'm glad the
               | basic tier is giving an accurate response with that query
        
               | Kye wrote:
               | The model and features available to it matters.
               | 
               | o1 says "I'm not aware of a widely recognized or
               | universally known product called "CurrentKey Stats," so
               | it may not be a mainstream or highly publicized app.
               | There are a few possibilities, though:" and then some
               | guesses.
               | 
               | o3-mini is similar: "I couldn't find any widely
               | recognized information about an app specifically called
               | CurrentKey Stats. It's possible that:"
               | 
               | When I turn on its shiny new search capability, it
               | correctly identifies and summarizes the app.
               | 
               | o3-mini-high, search turned off, asks for clarification.
               | 
               | Which means the default free 4o is quietly doing
               | retrieval-augmented generation behind the scenes. I
               | thought o3-mini would search if it didn't know the topic,
               | but I might be misremembering.
        
               | spenvo wrote:
               | Fascinating. I would have thought the fancier models
               | would have definitely known about it. The app is six
               | years old, has ~120 ratings globally with ~4.5 stars,
               | with several articles written about it etc.. It should be
               | pretty widely available in the training data
        
               | Kye wrote:
               | So sometimes you can prime it with more detail to jostle
               | the right bits in its neural network. This works if it's
               | in there but out of reach of more basic prompts.
               | 
               | "CurrentKey Stats mac app"
               | 
               | Which got us to a: "Are you looking for details about a
               | specific Mac app, instructions on how to access its key
               | statistics, or something else entirely? Let me know so I
               | can help you better."
               | 
               | Then I just kept saying yes until:
               | 
               | "I appreciate your confirmation! To help me pinpoint
               | exactly what you're looking for regarding the CurrentKey
               | Stats Mac app, could you please specify one of the
               | following?"
               | 
               | all of it
               | 
               | https://gist.github.com/kyefox/7884ab533996d07baf72c28629
               | 42b...
               | 
               | I don't know whether this is accurate or a statistically
               | plausible guess. It will just make up a whole thing in
               | other contexts where that's useful or interesting.
               | 
               | Try: "Make up a random mac app"
               | 
               | Quantum Quirk is a whimsical macOS application that adds
               | a dash of unpredictability and fun to your desktop
               | experience. Inspired by the bizarre world of quantum
               | physics, it transforms routine actions--like moving or
               | resizing windows--into a mini interactive experiment. And
               | yes, it's powered by qibits--quirky, playful digital bits
               | that bring a unique twist to your workflow.
               | 
               | et cetera et cetera et cetera
               | 
               | "Make up a random mac app that tracks your quantum
               | superposition in the cosmic multiplex" is fun
               | 
               | Also "Pretend you're a charlatan using the popular notion
               | of the ambiguity of "quantum" to swindle people. Sell me
               | on your quantum products."
               | 
               | Play the mark and work "I'm shaking in my qibits" in.
        
         | pjc50 wrote:
          | AI is going to solidify "conventional wisdom" and "common
          | sense" as whatever the AI says. That's why there's such a
          | fight over what assumptions and biases get baked into
          | that.
        
         | crazygringo wrote:
         | > _Basically, users are having a tougher time finding accurate
         | info about your product (even if the correct answer to their
         | query is in Google 's own search results)._
         | 
         | That's a gigantic "even if".
         | 
         | In my experience, I'm able to find stuff much easier with LLM's
         | that Google search _couldn 't_ surface.
         | 
         | If I'm looking for a product that does exactly X, Y but doesn't
         | Z, keyword search can be pretty terrible. LLM's actually
         | understand what I'm looking for, and have a much higher
         | probability of pointing me to it.
        
           | spenvo wrote:
            | Yeah, I have also found LLMs useful, sometimes even
            | with broad search queries. Which makes it quite the
            | paradox when you own a product on the other side of bad
            | LLM results.
        
       | mring33621 wrote:
       | I don't know how this is surprising.
       | 
       | LLM-provided solutions will reinforce existing network effects.
       | 
       | Things that are popular will have more related content...
        
       | __MatrixMan__ wrote:
       | Can confirm, I recently gave up on learning anything new re: data
       | visualization and have just been using matplotlib instead.
       | Training data for it has been piling up since 2008. The AI's are
       | so good at it that you hardly ever have to look at the code, just
       | ask for changes to the graph and iterate.
       | 
       | Honestly it's been kind of fun, but I do feel like the door is
       | closing on certain categories of new thing. Local maxima are
       | getting stickier, because even a marginal competence is enough to
       | keep you there--since the AI will amplify that competence in
       | well-trained domains by so much.
       | 
       | Emacs lisp is another one. I'd kind of like to build a map of
       | these.
        
         | hobs wrote:
          | They fail at even trivial requirements, like when my boss
          | asks for something and I have no idea if MPL supports it.
          | Visualization and visual mediums with complex
          | interconnected bits are actually among the harder things
          | to do from a text/programming basis... we routinely fail
          | to write tests that validate visual behavior without tools
          | like Selenium.
        
       | __MatrixMan__ wrote:
       | The Arrows of Time by Greg Egan (Orthogonal, Book 3) deals with
       | something analogous to this: Our characters must break themselves
       | out of a cycle which is impeding innovation. If you like your
       | scifi hard, the Orthogonal series is a lot of fun.
        
       | stevemadere wrote:
       | This is truly terrible.
       | 
       | What happened to a new JS front end library every week?
       | 
       | If this keeps up, we won't get to completely throw away all of
       | our old code and retool every two years (the way we've been
       | operating for the last 20 years)
       | 
       | How will we ever spend 85% of our time spinning up on new js
       | front end libraries?
       | 
       | And don't even get me started on the back end.
       | 
       | If AI had been around in 2010, we'd probably still have some
       | people writing apps in Rails.
       | 
       | OMG what a disaster that would be.
       | 
       | It's a good thing we just completely threw away all of the work
       | that went into all of those gems. If people had continued using
       | them, we wouldn't have had the chance to completely rewrite all
       | of them in node and python from scratch.
        
         | beej71 wrote:
         | PHP everywhere?
        
         | dehrmann wrote:
         | New JS frameworks every week stopped around the time React
         | became popular.
        
         | wruza wrote:
         | As if web/css wasn't a new gui library no one asked for. Peak
         | programming was VB6 & Delphi 6 (maybe 7). Everything after that
         | was just treading water in increasingly degenerate ways.
        
           | mouse_ wrote:
           | Eh. Web is the best write-once-run-everywhere we've achieved
           | so far, especially with the proliferation of WASM. I'd be
           | lying if I said it was perfect, but it's better than Java.
        
             | mrguyorama wrote:
             | >Web is the best write-once-run-everywhere we've achieved
             | so far,
             | 
             | Web for a decade or more now has been "Rewrite a hundred
             | times, run only in chrome"
        
         | Me000 wrote:
         | Some real revisionist history as Rails cribbed most of those
         | gems from Python. Now Python just rebranded for web and its
         | doing everything Rails does and more.
        
         | halfmatthalfcat wrote:
         | [flagged]
        
           | alfalfasprout wrote:
           | This website is mostly full of very junior developers that
           | just click on articles they think "sound smart" or are a hot
           | take they agree with.
           | 
           | Don't get me wrong, it's also one of the few places where you
           | find experts from all sorts of industry mingling. But the
           | quality of commentary on HN has plummeted in the last 10
           | years.
        
             | dang wrote:
             | " _Please don 't sneer, including at the rest of the
             | community._" It's reliably a marker of bad comments and
             | worse threads.
             | 
             | https://news.ycombinator.com/newsguidelines.html
        
           | dang wrote:
           | Please don't respond to a bad comment by breaking the site
           | guidelines yourself. That only makes things worse.
           | 
           | https://news.ycombinator.com/newsguidelines.html
        
             | halfmatthalfcat wrote:
             | My bad dang
        
         | whoknowsidont wrote:
         | >What happened to a new JS front end library every week?
         | 
         | Yeah I don't think this ever happened.
        
         | beepbooptheory wrote:
         | The implication here that AI itself does not come with its own
         | churn and needless wheel spinning feels a little out of touch
         | with our current reality.
        
       | carlosdp wrote:
       | I don't think this is a bad thing. Pretty much all of the
       | author's examples of "new and potentially superior technologies"
       | are really just different flavors of developer UX for doing the
       | same things you could do with the "old" libraries/technologies.
       | 
       | In a world where AI is writing the code, who cares what libraries
       | it is using? I don't really have to touch the code that much, I
       | just need it to work. That's the future we're headed for, at
       | lightning speed.
        
         | 3D30497420 wrote:
         | The problem is if that code hasn't already been written in some
         | form or another, then the LLM is much less effective at giving
         | recommendations.
         | 
         | I've been playing around with embedded systems, specifically
         | LoRa libraries on ESP32s. Code from LLMs is next to useless for
         | a lot of what I'm trying to do since it is relatively niche.
        
         | pphysch wrote:
         | > In a world where AI is writing the code, who cares what
         | libraries it is using? I don't really have to touch the code
         | that much, I just need it to work. That's the future we're
         | headed for, at lightning speed.
         | 
         | This attitude works for write-and-forget workflows where the
         | only thing that matters is whether it returns the answer you
         | want (AKA "hacking it").
         | 
         | Once you add in other concerns: security, performance,
         | maintainability, it can fall apart.
        
           | hypothesis wrote:
           | > Once you add in other concerns: security, performance,
           | maintainability, it can fall apart.
           | 
           | Does anyone care about that? E.g. CRWD is at an all-time
           | high, after all. According to the market, there is zero
           | need to change anything.
        
             | pphysch wrote:
             | > Does anyone care about that?
             | 
             | Yes
        
               | hypothesis wrote:
               | That's not much to counter my example.
        
         | tikhonj wrote:
         | > _who cares what libraries it is using?_
         | 
         | Presumably the people who have to read, debug and maintain the
         | resulting garbage.
         | 
         | Then again, we had so much garbage code _before_ LLMs that
         | it was clearly never _that_ important.
        
       | tobyhinloopen wrote:
       | I noticed this as I experimented with alternatives for React and
       | all of them I tried were terrible on OpenAI/ChatGPT. Either it
       | doesn't know them, or it makes weird mistakes, or uses very
       | outdated (no longer working) versions of the code.
       | 
       | It is also annoying that most modern JS things have 4 versions to
       | do the same thing: With TS, With TS + Decorators, With plain JS,
       | with JSX, etc. so code generation picks one that isn't compatible
       | with the "mode" you use.
        
         | thewebguyd wrote:
         | I've noticed ChatGPT/GH Copilot is also particularly bad at
         | PowerShell (I do sysadminy things), especially anything to do
         | with the MS Graph API.
         | 
         | It just makes up Cmdlets a lot of the time. If you prod it
         | enough, though, it will eventually get it right, which
         | strikes me as odd; it's like the training data was just full
         | of really bad code.
         | 
         | By contrast, anything I've asked it to do in Python has been
         | more or less spot on.
         | 
         | I fear that in the future the choice of tech stack is going to
         | be less on the merits of the stack itself and more centered
         | around "Which language and framework does ChatGPT (or other AI)
         | produce the best output for"
        
       | lcfcjs6 wrote:
       | There is enormous fear of AI in the mainstream media, but the
       | thing that excites me the most about it is health care. AI
       | will find the cure for Alzheimer's and countless other
       | diseases, there's no doubt about it. This simple fact is
       | enough to make it acceptable.
        
       | pphysch wrote:
       | I don't think this is unique to AI. There are categories of
       | knowledge that are infested with bad practices (webdev,
       | enterprise software), and even a direct web search will lead you
       | to those results. AI definitely regurgitates many of these bad
       | practices, I've seen it, but it's not obvious to everyone.
       | 
       | I think it's unrealistic to expect a general-purpose LLM to
       | be a practical expert in a new field where there are
       | potentially zero human practical experts.
        
       | ripped_britches wrote:
       | This should not be relevant with cursor being able to include
       | docs in every query. For those who don't use this I feel for ya.
        
       | tolerance wrote:
       | So what.
       | 
       | ...if society continues to delegate more of their work to AI then
       | we are going to fall back into the grips that inform us that some
       | people are better at things than other people are and some are
       | worse at things than other people are and this is what lies
       | beneath the bridge of relying or not relying on AI to leverage
       | your capacity to think and act on what you feel.
       | 
       | I think that People who will be willing to put in effort for
       | their crafts _without AI_ will be the ones who will be willing to
       | try out new things and seek opportunities for ingenuity in the
       | future. I think that the problem people have with this idea is
       | that it runs counter to notions related to-- _ahem_ --
       | 
       | diversity, equity and inclusion...
       | 
       | On one hand, and on its little finger, is the legitimate concern
       | that if companies who develop LLMs are not transparent with the
       | technologies they make available to users when generating code,
       | then they'll hide all the scary and dangerous things that they
       | make available to the people who'll think, act and feel corrupt
       | regardless of the tools they wield to impose disadvantages onto
       | others. But I don't think that will make a difference.
       | 
       | The only way out is hard work in a world bent on making the work
       | easy after it makes you weak.
        
         | yapyap wrote:
         | I think it's very easy to say people dislike the notion you
         | said cause it goes against DEI (the e stands for equality btw),
         | like it's such an easy scapegoat.
         | 
         | People just don't wanna put the work in, or aren't able to put
         | the work in cause they are busy surviving day to day, y'know,
         | putting food on the table. Cause that is not a given for
         | everyone.
        
           | michaelcampbell wrote:
           | > DEI (the e stands for equality btw),
           | 
           | According to whom? https://en.wikipedia.org/wiki/Diversity%2C
           | _equity%2C_and_inc...
        
           | tolerance wrote:
           | I know I didn't make it easy to catch, but I think you may
           | have misread me.
           | 
           | I wasn't referring to "DEI" as in the corpo-state initiative
           | but the concepts themselves as they're received independent
           | of how they're packaged in the acronym; in a political
           | context.
           | 
           | In this way, I think to call it "scapegoating" would do a
           | disservice to a legitimate social conflict.
           | 
           | I agree with your final observation in general, but what's
           | your point?
        
       | Rehanzo wrote:
       | Does anyone know what font is used here?
        
         | esafak wrote:
         | https://fonts.google.com/specimen/Lexend
        
       | anal_reactor wrote:
       | Not a problem. I'm sure that being able to work well with new
       | information is the next goal most researchers are working
       | towards, so the entire post feels like a boomer complaining
       | "computers are bad because they're big and bulky" thirty years
       | ago, not being able to imagine the smartphone revolution.
        
       | benve wrote:
       | I think this is true because I caught myself thinking: "it is
       | useless to create a library or abstraction for the developers
       | of my project; better to keep everything verbose, using the
       | most popular libraries on the web". Until yesterday, having
       | an abstraction (or a better library/framework) could save a
       | lot of time writing code. Today, if the code is mostly
       | generated, there is no need to create an abstraction. The AI
       | understands 1000 lines of pandas code much better than 10
       | lines using my library (which rationalises the use of
       | pandas).
       | 
       | The result will not only be a disincentive to use new
       | technologies, but a disincentive to build products with an
       | efficient architecture in terms of lines of code, and in
       | particular a disincentive to abstraction.
       | 
       | Maybe some product will become a hell with millions of lines of
       | code that no one knows how to evolve and manage.
        
         | gavmor wrote:
         | Wow, and this posture _doesn't_ apply to junior developers,
         | i.e. a good abstraction is needed to avoid overwhelming the
         | human "context window."
         | 
         | But it is a shame--and possibly an existential risk--that we
         | then begin to write code that can _only_ be understood via LLM.
        
         | hobs wrote:
         | This is completely wrong and assumes that an LLM is just much
         | better at its job than it is - an LLM doesn't do better with a
         | chaotic code base, nobody does - a deeply nonsensical system
         | that sort of works is by far the hardest to reason about if you
         | want to fix or change anything, especially for a thing that has
         | subhuman intelligence.
        
           | baq wrote:
           | LLMs work best matching patterns. If 1k loc matches patterns
           | and the 10 loc doesn't, it's a problem.
           | 
           | The only thing the OP is missing which combines the best of
           | both worlds is to always put source of and/or docs for his
           | abstractions into the context window of the LLM.
        
             | xmprt wrote:
             | If your abstractions match common design patterns then
             | you've solved your problem. It's ridiculous to assume that
             | an LLM will understand 1k LOC of standard library code
             | better than 10 lines of a custom abstraction which uses a
             | common design pattern.
             | 
             | It's more prone to hallucinating things if your custom
             | abstraction is not super standard but at least you'd be
             | able to check its mistakes (you're checking the code
             | generated by your LLMs right?). If it makes a mistake with
             | the 1k LOC then you're probably not going to find that
             | error.
        
               | baq wrote:
               | LLMs are not human; they see the whole context window
               | at once. If anything, it's ridiculous to assume
               | otherwise.
               | 
               | I'll reiterate what I said before: put the whole source
               | of the new library in the context window and tell the LLM
               | to use it. It will, at least if it's Claude.
        
               | xmprt wrote:
               | Attention works better on smaller contexts since there's
               | fewer confounding tokens, so even if the LLM can see
               | the entire context, it's better to keep the amount of
               | confounding context lower. And at some point the
               | source code will exceed the size of the context
               | window; even the newer ones with millions of tokens of
               | context can't hold
               | the entirety of many large codebases.
        
               | baq wrote:
               | Of course, but OP's 1kloc is nowhere near close to any
               | contemporary limit. Not using the tool for what it's
               | designed for because it isn't designed for a harder
               | problem is... unwise.
        
         | esafak wrote:
         | > Maybe some product will become a hell with millions of lines
         | of code that no one knows how to evolve and manage.
         | 
         | That is exactly what will happen, so why would you do that?
        
           | baq wrote:
           | On the other hand you should ask yourself why do you care? If
           | you assume no human will ever read the code except in very
           | extraordinary circumstances, why wouldn't you do that?
        
         | CerebralCerb wrote:
         | Only in one sense. As code is now cheaper, abstractions meant
         | to decrease code quantity have decreased in value. But
         | abstractions meant to organize logic to make it easier to
         | comprehend retains its value.
        
       | JimboOmega wrote:
       | Has there been any progress or effort on solving the underlying
       | problem?
       | 
       | I'm not entirely sure why AI knowledge must be close to a year
       | old, and clearly this is a problem developers are aware of.
       | 
       | Is there a technical reason they can't be, for instance, a
       | month behind rather than close to a year?
        
       | OutOfHere wrote:
       | Always get the response with and without a web search. The web
       | search may yield a newer solution.
       | 
       | Also, each package should ideally provide an LLM ingestible
       | document. Upload this for the LLM, and have it answer questions
       | specific to the new package.
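One mechanical way to produce such a document, sketched here in Python on the assumption that the package exposes ordinary docstrings (`llm_digest` is a hypothetical helper, not an existing tool), is to walk a module's public API with `inspect` and concatenate signatures and docstrings into one blob:

```python
import inspect
import json  # stand-in target package; any importable module works


def llm_digest(module):
    """Concatenate public signatures and docstrings into one LLM-ingestible string."""
    parts = [f"# {module.__name__}\n{inspect.getdoc(module) or ''}"]
    for name, obj in inspect.getmembers(module):
        # Skip private names and anything that isn't a function or class.
        if name.startswith("_") or not (inspect.isfunction(obj) or inspect.isclass(obj)):
            continue
        try:
            sig = str(inspect.signature(obj))
        except (ValueError, TypeError):  # some builtins expose no signature
            sig = "(...)"
        parts.append(f"## {name}{sig}\n{inspect.getdoc(obj) or ''}")
    return "\n\n".join(parts)


digest = llm_digest(json)  # one text blob, ready to paste into a context window
```

Pasting the resulting digest at the top of a prompt gives the model current signatures even for packages released after its training cutoff.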
        
       | jayd16 wrote:
       | It's pretty interesting and mildly shocking that everyone is just
       | making the same 'who needs a new JS library' joke.
       | 
       | What about closed source tooling? How do you expect an AI to ever
       | help you with something it doesn't have a license to know about?
       | Not everything in the world can be anonymously scraped into the
       | yearly revision.
       | 
       | If AI is going to stay we'll have to solve the problem of
       | knowledge segmentation. If we solve that, keeping it up to date
       | shouldn't be too bad.
        
         | impure-aqua wrote:
         | >What about closed source tooling? How do you expect an AI to
         | ever help you with something it doesn't have a license to know
         | about? Not everything in the world can be anonymously scraped
         | into the yearly revision.
         | 
         | This is not a novel problem. Proprietary toolchains already
         | suffer from decreased resources on public forums like Stack
         | Overflow; AI did not create this knowledge segmentation, it is
         | scraping this public information after all.
         | 
         | >It's pretty interesting and mildly shocking that everyone is
         | just making the same 'who needs a new JS library' joke.
         | 
         | Surely the proprietary toolchain is itself the 'new JS
         | library'?
         | 
         | Most developers I know don't enjoy working with esoteric
         | commercial solutions that suffer from poor documentation, when
         | there exists an open-source solution that is widely understood
         | and freely documented.
         | 
         | I do not see why AI code generation further incentivizing the
         | use of the open-source solution is a problem.
        
           | jayd16 wrote:
           | > This is not a novel problem. AI did not create this
           | knowledge segmentation, it is scraping this public
           | information after all.
           | 
           | I think you misunderstand the situation. You as a person can
           | be privy to private knowledge. Relying on AI enough that you
           | can't use that private knowledge is the novel situation.
           | 
           | > I do not see why AI code generation further incentivizing
           | the use of the open-source solution is a problem.
           | 
           | Maybe you don't get it and have never experienced it but
           | there's a massive amount of development done against
           | unreleased APIs or hardware. Game engines, firmware, etc. I
           | doubt Apple is going to publish new SDKs for their new
           | widgets long before any devs use them.
        
       | montjoy wrote:
       | The lack of new training data also makes it bad at projects that
       | are still maturing because it will suggest outdated code - or
       | worse it will mix/match old and new syntax and generate something
       | completely broken.
       | 
       | I worry that the lack of new examples for it to train on will
       | self-reinforce running old syntax that has bad patterns.
       | 
       | If the "AI" could actually store its mistakes and corrections
       | from interactive sessions long-term I think it would greatly
       | alleviate this problem, but that opens up another whole set of
       | problems.
        
       | NiloCK wrote:
       | I, too, wrote a shittier version of this a little while back:
       | https://www.paritybits.me/stack-ossification/
       | 
       | Another observation since then: good documentation for newer tech
       | stacks will _not_ save the LLM's capabilities with that tech. I
       | think the reason, in short, is that there's no shortcut for
       | experience. Docs are book learning for tech stacks - millions
       | (billions) of lines of source code among the training data are
       | something else entirely.
        
       | richardw wrote:
       | I tried a new agent library with a model a few weeks ago. Just
       | pasted the relevant api docs in and it worked fine.
       | 
       | However, while I'm proud of the outcomes, I'm not proud of the
       | code. I'm not releasing anything open source until I feel it's
       | mine, which is another step. I'd be a bit embarrassed bringing
       | another dev on.
       | 
       | "I'm Richard and I'm using AI to code" Support Group: "Hi
       | Richard"
        
       | slevis wrote:
       | Looks like I might be in the minority, but I disagree with this
       | prediction. Better models will also be better at abstracting and
       | we have seen several examples (e.g. the paper LIMO: Less is More
       | for Reasoning) that with a small amount of training data, models
       | can outperform larger models.
        
       | g9yuayon wrote:
       | > Once it has finally released, it usually remains stagnant in
       | terms of having its knowledge updated....meaning that models will
       | not be able to service users requesting assistance with new
       | technologies, thus disincentivising their use.
       | 
       | I find such an argument weak. We could say the same thing
       | about a book, like "Once The Art of Computer Programming is
       | finally published, it usually remains stagnant in terms of
       | having its knowledge updated, thus disincentivizing people
       | from learning new algorithms".
        
       | casey2 wrote:
       | Truly and honestly 99% of developers haven't even heard of
       | chatgpt or copilot, let alone the general public. It's a
       | self-imposed problem for the orgs that choose to use such
       | tools. More
       | to the point, recency bias is so much stronger I'd rather have a
       | system that points people to the current correct solution than a
       | slightly better solution that is somehow harder to understand
       | despite its claimed simplicity by fanatics.
        
       | ilrwbwrkhv wrote:
       | > However, a leaked system prompt for Claude's artifacts feature
       | shows that both React and Tailwind are specifically mentioned.
       | 
       | Damn.
        
       | cushychicken wrote:
       | ...Isn't this the website that constantly encourages people to
       | "choose boring technology" for their web tech startups?
       | 
       | Aren't a reasonable portion of the readers here people who bemoan
       | the constant learning curve hellscape of frontend development?
       | 
       | And now we're going to be upset that tools that help us work
       | faster, which are trained on data freely available on the
       | internet and thus affected by the volume of training material,
       | decide to (gasp) _choose solutions with a greater body of
       | examples?_
       | 
       | Just can't satisfy all the people all the time, I guess! SMH.
        
       | crazygringo wrote:
       | No, AI isn't.
       | 
       | Any new tech, or version upgrade, or whatever, takes time for
       | _people_ to become familiar with it. You might as well say
       | "Stack Overflow is stifling new tech adoption" because brand-new
       | stuff doesn't have many Q's and A's yet. But that would be a
       | silly thing to say.
       | 
       | I'm not going to adopt a brand-new database _regardless_ of LLM
       | training data cutoff, just because enough _people_ haven't had
       | enough experience with it.
       | 
       | And LLMs have a commercial incentive to retrain every so often
       | anyway. It's not like we're going to confront a situation
       | where an LLM doesn't know anything about tech that came out 5
       | or 10 years ago.
       | 
       | Early adopters will be early adopters. And early adopters aren't
       | the kind of people relying on an LLM to tell them what to try
       | out.
        
         | TheTaytay wrote:
         | I feel myself taking AI's base knowledge of a tech stack into
         | account when I work. Otherwise it feels like I am swimming
         | upstream. I can't be the only one, and this article resonated
         | with me.
        
           | crazygringo wrote:
           | Sure, but what I'm saying is that's not where the knowledge
           | bottleneck is.
           | 
           | The knowledge bottleneck is in the _human population_,
           | which is _then_ reflected in Stack Overflow and blogs,
           | which is _then_ reflected in LLMs.
           | 
           | LLMs aren't doing anything special to stifle new tech
           | adoption. New tech is harder to adopt _because it's new
           | tech, because people in general are less familiar with it_.
           | 
           | (And there's a little bit of a training delay in LLMs'
           | updating, but that's a minor aspect here compared to the
           | years it takes for a new Python package to become popular,
           | for example.)
        
             | hinkley wrote:
             | It's generally true that the people who come up with the
             | first interesting idioms in a new space get an outsized
             | influence on the community. It takes a while for people to
             | borrow or steal their ideas and run with them.
             | 
             | On the plus side, while the AI can't write in that
             | language, there's money to be made doing the work
             | organically.
        
         | milesvp wrote:
         | Strong disagree here. I've been trying to learn Zig, and I'm
         | thwarted enough by chatgpt giving me outdated information on
         | Zig's unstable API that if I didn't have a strong incentive to
         | learn it for its cross-compiler, I'd likely turn my efforts
         | towards another language. This effect can greatly alter the
         | adoption curve of a new tech, which can leave it dying on the
         | vine.
         | 
         | You're not wrong though, in that Stack Overflow has the exact
         | same problem. The main difference is that with Stack Overflow,
         | there was a bonus in becoming the first expert on the platform,
         | so while it does stifle new tech adoption, it at least
         | encourages new tech content creation, which in turn encourages
         | new tech. Though, I don't know if it's a net positive or
         | negative in aggregate.
         | 
         | I think this problem will likely lessen as training becomes
         | cheaper and faster. But right now, there is really strong
         | incentives to avoid any tech that has had breaking changes in
         | the last 3 years.
        
           | throwup238 wrote:
           | Have you tried Cursor instead of a chatbot? I don't trust it
           | to do much coding for me but it has this feature where you
           | give it the URL of the documentation, and it indexes it for
           | you. Then you can reference that documentation in the chat
           | with the @ symbol and Cursor applies RAG to the indexed
           | content to include relevant documentation. I usually index
           | autogenerated API docs and any website docs the library has
           | like guides and tutorials.
           | 
           | The experience is night and day and it's the only reason I
           | pay for Cursor on top of OpenAI/Anthropic. The answers it
           | gives are much more accurate when including documentation and
           | it helps a lot with exploring APIs. Probably helps a lot with
           | the code generation too, but I don't use Composer as much.
        
           | hinkley wrote:
           | Someone recently observed that it's the first 10% of
           | adoption that sets the future for a product.
           | 
           | There are people online calling it the "tech adoption cycle"
           | but this is a concept I encountered in a literal Business 101
           | class. 2.5% of the population are Innovators. 12.5% are early
           | adopters. Then there's 70% in the middle where most of your
           | cash comes in, and by the time the laggards hit you're
           | optimizing for cost per unit because you've had to drop the
           | price so much due to competition from copycats and from the
           | next new thing.
           | 
           | So by the time 60% of your early adopters are onboard it's
           | already been decided if you're on a rocket ship or this is
           | just a burp.
           | 
           | Early adopters have a high tolerance for inconveniences but
           | it's not infinite. If they bleed enough they will find
           | something else, and then you are DOA.
        
           | _puk wrote:
           | Having spent the past week deep in cursor, it's amazing for
           | building out a basic web app.. but getting it to a point of
           | productivity takes a while..
           | 
           | Command line install for latest svelte.. nope npx install is
           | now deprecated, have to do it another way.. ok, let's go old
           | school and read the docs.
           | 
           | Great, it's up and running, but nope, Svelte has just hit V5
           | and the LLM isn't aware of the changes.. ok, do I drop back
           | to 4 on a new code write, or spend the time to get a decent
           | .cursorrules in place to make sure it's using the right
           | version. Ok, that's in, but look tailwind is too new too..
           | ok, fine let's get that into .cursorrules..
           | 
           | Oh look, daisy has a v5 coming in 15 days, luckily I got in
           | there just in time..
           | 
           | I thought svelte, tailwind and daisy were the one true way
           | nowadays!
           | 
           | I now have a rule in my cursorrules that asks for any errors
           | that I spot in the code (related to wrong versions) results
           | in both the fix and a rule for the cursorrules so it doesn't
           | happen again. That works well.
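For readers who haven't used it: such rules live in a free-form `.cursorrules` text file at the project root. A hypothetical example of version-pinning rules in the spirit described above (the wording is purely illustrative; there is no fixed schema):

```text
# .cursorrules (illustrative)
- This project uses Svelte 5: prefer runes ($state, $derived) over
  stores and `export let` props.
- Tailwind is v4: do not generate a tailwind.config.js; theme values
  are configured in CSS.
- Whenever I report a wrong-version error, respond with both the code
  fix and a new rule for this file so the mistake is not repeated.
```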
        
             | mattmanser wrote:
             | The unfortunate truth is that the one true way these days
             | is React, and if you're doing anything else you're in for a
             | world of pain.
        
           | jeremyjh wrote:
           | Imagine a world without ChatGPT or Stack Overflow. Is it
           | easier to build your Zig project in that world?
        
             | bangaladore wrote:
             | The simple answer is no.
             | 
             | However, it's naive to deny that people are driven to
             | technology with large amounts of samples and documentation.
             | When trained on it, LLMs can essentially create infinite,
             | customized versions of these.
             | 
             | LLMs artificially inflate the amount of documentation and
             | samples for older technologies, and many lack the knowledge
             | of newer ones. This creates a feedback loop until the newer
             | technology is included, but the damage has already been
             | done.
             | 
             | That's just a perspective, I don't know that I fully agree
             | with the premise, and certainly don't know a solution. The
             | best solution I can think of is more LLMs that are
             | essentially constantly RAGing from the internet, rather
             | than mostly using pre-trained info. Obviously this part is
             | no longer just the model, but I imagine models could be
             | made to make the RAG process more efficient, or better.
        
             | camdenreslink wrote:
             | No, but the relative ease of picking up Zig is lower
             | compared to another language that has a lot of training
             | data (maybe golang could be used for similar projects) in
             | the LLM world. So users might be more reluctant to pick up
             | Zig at all now.
             | 
             | That could slow down adoption of new languages, frameworks,
             | libraries etc.
        
           | ysofunny wrote:
           | or you could try learning Zig in the old-school way, the way
           | we used to learn before LLMs
           | 
           | who am I kidding? LLMs have changed the game forever. making
           | new computer programming languages is no longer
           | professionally useful, like hand-knitting
        
           | lasagnagram wrote:
           | Have you considered learning by reading documentation and
           | tutorials, instead of asking the lie generator? I mean, how
           | do you think people learned things before 2023?
        
             | williamcotton wrote:
             | What about a library that doesn't have any documentation or
             | tutorials, like the Soundfont lib for JUCE, where the
             | LLM was essentially all I had to get a working product?
        
               | goatlover wrote:
               | Where did the LLM get information for the Soundfont lib
               | if there is no documentation, tutorials, answered
               | questions or source code online?
        
               | daveguy wrote:
               | Same place every LLM gets "information" -- plucking
               | random bits from the latent space that are most likely to
               | follow the input context.
        
               | swatcoder wrote:
               | Let's dig into this for a second.
               | 
               | Which "Soundfont lib for JUCE" are you talking about? How
               | did you find it in the first place? Why did you decide to
               | use it if there was no documentation? How did you know it
               | was appropriate or capable for your use case? How did you
               | know it was mature, stable, or safe? Did you read the
               | headers for the library? Did they have comments? Did you
               | read the technical specification for the Soundfont format
               | to understand how it modeled sample libraries?
               | 
               | I work in this stuff all the time, and I'm so very
               | puzzled by what you're even suggesting here.
        
       | datadrivenangel wrote:
       | This is the same problem as google/search engines: A new
       | technology has less web presence, and thus ranks lower in the
       | mechanisms for information distribution and retrieval until
       | people put in the work to market it.
        
       | skeeter2020 wrote:
        | I don't agree, because the people using these tools for their
        | work were never doing innovative tech in the first place.
        
       | nektro wrote:
       | developers using ai continue to find new and novel ways to make
       | themselves worse
        
       | janalsncm wrote:
       | I've been out of web dev for a while, but maybe the problem is
       | there's a new framework every 6 months and instead of delivering
       | value to the end user, developers are rewriting their app in
       | whatever the new framework is.
        
         | cluckindan wrote:
         | Nice stereotype. Does it hold water?
         | 
         | I've been using the same backend framework professionally for
         | over 10 years, and the same frontend framework for over 6
         | years. Clearly your thoughts on the matter are not reflective
         | of reality.
        
           | loandbehold wrote:
           | Yes. If you have a working system built on 10 years old web
           | framework, it's considered obsolete and in need of being
            | upgraded/rewritten. Why? Imagine houses needing to be rebuilt
            | because their foundations are 10 years old.
        
             | cluckindan wrote:
             | Not talking about a single system here. The framework has
             | gone through five major versions in that time and my
             | projects usually last a couple months to a couple years.
        
         | bmurphy1976 wrote:
         | That's only part of the problem. Complexity and expectations
         | have exploded in the last decade, while investment is getting
         | tighter and tighter (for the average corporation anyway, not
         | necessarily the big boys).
         | 
          | The constant framework churn is one attempt at solving the
         | complexity problem. Unfortunately I don't think it's working.
        
         | throwup238 wrote:
         | I don't think that's really the case anymore. The vast majority
         | are on React, Vue, or Svelte (in order of my perception of
         | their popularity). On the CSS side it seems like Tailwind and
          | PostCSS have taken over. The heavier framework category is
         | covered by Next.js. Other than Next, most of that popularity
         | started solidifying in 2020. There are a bunch of newer
         | frameworks like Astro and HTMX and so on, but it doesn't seem
          | like their adoption is eating into much of the bigger players'
          | "market."
         | 
         | There's still the problem of many library authors playing fast
         | and loose with semver breaking changes and absolutely gigantic
         | dependency trees that exacerbate the issue. That's where most
         | of my churn comes from, not new frameworks.
        
           | datadrivenangel wrote:
            | Yeah, but React is still swapping major recommendations/styles
            | every 18 months. How do you create a React application these
            | days? Not create-react-app, apparently?
        
             | promiseofbeans wrote:
             | NextJS for a SSR app, or Vite for a create-react-app style
             | SPA.
             | 
             | Vite has been great for the whole web-dev ecosystem, since
             | it's super easy to configure, and very pluggable, so most
             | frameworks are built around it now. That means you can
             | write, e.g. a new CSS preprocessor, and if you make a Vite
             | plugin, it works with (almost) every framework without
             | extra effort
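
       To make the point above concrete, here is an illustrative toy, not a
       real preprocessor: the ".pcss" extension, plugin name, and comment-
       stripping behavior are all made up for this sketch; only the
       { name, transform } hook shape follows Vite's actual plugin
       convention.

       ```javascript
       // Sketch of a minimal Vite-style plugin, assuming a hypothetical
       // ".pcss" dialect that is plain CSS plus "//" line comments.
       function lineCommentCss() {
         return {
           name: 'line-comment-css',
           // Vite calls transform(code, id) per module; skip non-.pcss files
           transform(code, id) {
             if (!id.endsWith('.pcss')) return null;
             // Strip "//" comments to end of line, hand back plain CSS
             return { code: code.replace(/\/\/[^\n]*/g, ''), map: null };
           },
         };
       }

       // Usage (in vite.config.js):
       //   export default { plugins: [lineCommentCss()] };
       ```

       Because the plugin only sees (code, id) pairs, any framework that
       runs on Vite gets the new syntax for free, which is the ecosystem
       effect described above.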
        
               | zkldi wrote:
                | But this isn't even true, and NextJS is well into
                | egregious complexity. Remix was an alternative option
                | in the space that is now deprecated in all but name in
                | favour of React Router v7, which (for those just tuning
                | back in) means React Router is now a framework.
               | 
               | If you wrote your app in NextJS 2 years ago, you would
               | already have to rewrite chunks of it to get it to compile
               | today. These tools are NOT solidified, they are releasing
               | breaking changes at least once a year.
        
             | throwup238 wrote:
             | IMO create-react-app was a crutch for the last generation
             | of build systems like webpack which were egregiously
             | complex. Nowadays you just use Vite and start with whatever
             | Github template has all the tech you need. Even starting a
             | project from scratch is really simple now since the config
             | files to get started are tiny and simple. There's always
             | the problem of outdated tutorials on the internet, but as a
             | frontend dev spinning up a new project has never been
             | simpler.
             | 
             | The pace of development is definitely challenging. There's
             | so many libraries involved in any project that many fall
             | behind in updating their code and application devs get
             | caught in the crossfire when trying to upgrade (and woe be
             | to you if someone added react = "*" somewhere in their peer
             | dependencies).
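
       The peer-dependency foot-gun mentioned above looks like this (a
       made-up package.json fragment; the package name is hypothetical): a
       wildcard peer range lets the library claim compatibility with any
       React version, so conflicts only surface when the app actually runs.

       ```json
       {
         "name": "some-ui-library",
         "version": "1.0.0",
         "peerDependencies": {
           "react": "*"
         }
       }
       ```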
        
           | benatkin wrote:
            | I think their combined market share is finally shrinking for
            | leading edge projects.
            | https://news.ycombinator.com/item?id=43008190
            | https://news.ycombinator.com/item?id=42388665
            | 
            | Also, AI makes a lot of terrible mainstream stuff because the
            | natural bias is towards the mainstream. That's where I'd count
            | its tendency to default to React if I ask for frontend code
            | without further context.
        
         | graypegg wrote:
         | This is obviously a pretty common belief, but I do think web
         | dev gets held to a weird standard when framed as "the one with
         | too much tech churn".
         | 
         | Just because someone solves a problem with a new library or
         | framework does not mean they solved a problem for all of web
         | development, and I think the current concentration of
         | applications made with boring things sort of reflects that. [0]
         | 
         | > developers are rewriting their app in whatever the new
         | framework is.
         | 
          | That is obviously an exaggeration. Most devs, most teams,
         | most companies are not open to rewriting any application ONLY
         | because it's a new framework. If they are, they probably have
         | other priorities that point at X framework/library when
         | rewrites do happen, because a rewrite is big enough already
         | without having to also create the pattern/library code.
         | 
         | I will absolutely agree that we ignore the user more than we
         | should. That should change. But I think people being excited
         | about something on HN or some other metric of "a new framework
         | every 6 months" isn't as causative as the usual hive mind would
         | imply.
         | 
         | [0] https://www.statista.com/statistics/1124699/worldwide-
         | develo...
        
           | ehutch79 wrote:
            | It's a lot better now, but there was absolutely a period when
            | it _felt_ like there was a new framework/library weekly.
           | This was 5-10 years ago. 'Cambrian explosion' was the term
           | going around.
           | 
           | A lot of it was in react space, with router libraries or the
           | newest data store being the thing you needed to be using.
           | Definitely turned me off react, personally at least. The
           | angular.js/angular2 migration was also a big pain around this
           | time as well.
           | 
           | There was a lot of social pressure from influencers on
           | youtube and various social media that you NEEDED to switch
           | now. This was the new hotness, it was blazing fast, anything
           | else was obsolete tech debt. There was one instance of 'or
           | you should be fired' that sticks with me.
           | 
           | I think we're just used to the hyperbole and are all a lot
           | more jaded now.
           | 
           | Compare this to the backend, where django, rails, and the
           | others haven't really changed. I haven't felt the need or
           | pressure to rewrite my views/controllers/whatever at all.
        
             | crabmusket wrote:
             | > There was a lot of social pressure from influencers on
             | youtube and various social media that you NEEDED to switch
             | now
             | 
             | I wish I had something more coherent to say about this,
             | but: I think this is true, and it frustrates and saddens me
             | that anybody took seriously what influencers on youtube had
             | to say. It seems so obvious that they are there to get
             | views, but even beside that, that they don't know anything
             | about _my app_ or the problems _my users_ have.
        
         | squigz wrote:
         | This would be a symptom of bad management, not bad developers.
         | 
         | And to add to what others have said, this stereotype never
         | really held up in my experience either. Any serious web dev
         | shop is going to have the framework they use and stick with it
         | for both long- and short-term clients. And there are many
         | mature options here.
         | 
         | I don't doubt this happens, a lot, but again, I think it's more
         | about bad management than anything - and bad management will
         | always make bad tech decisions, no matter the topic.
        
         | amrocha wrote:
         | Can you name a single web framework with wide adoption that was
         | released in the last 6 months? I expect you to delete your
         | comment if you can't.
        
       | AlienRobot wrote:
       | >Consider a developer working with a cutting-edge JavaScript
       | framework released just months ago. When they turn to AI coding
       | assistants for help, they find these tools unable to provide
       | meaningful guidance because their training data predates the
       | framework's release. This forces developers to rely solely on
       | potentially limited official documentation and early adopter
       | experiences, which, for better or worse, tends to be an 'old' way
       | of doing things and incentivises them to use something else.
       | 
       | I can't help but feel that a major problem these days is the lack
        | of forums on the Internet, especially for programming. Forums
       | foster and welcome new members, unlike StackOverflow. They're
       | searchable, unlike Discord. Topics develop as people reply,
       | unlike Reddit. You're talking to real people, unlike ChatGPT. You
       | can post questions in them, unlike Github Issues.
       | 
       | When I had an issue with a C++ library, I could often find a
       | forum thread made by someone with a similar problem. Perhaps
       | because there are so many Javascript libraries, creating a
       | separate forum for each one of them didn't make sense, and this
       | is the end result.
       | 
       | I also feel that for documentation, LLMs are just not the answer.
       | It's obvious that we need better tools. Or rather, that we need
       | tools. I feel like before LLMs there simply weren't any universal
       | tools for searching documentation and snippets other than
       | Googling them, but Googling them never felt like the best method,
       | so we jumped from one subpar method to another.
       | 
       | No matter what tool we come up with, it will never have the
       | flexibility and power of just asking another human about it.
        
       | bilater wrote:
       | This is precisely why I have said that every new
        | framework/library should have an endpoint, in markdown or text
        | or whatever format works best for LLMs, that has all the docs
        | and examples in one single page so you can easily copy it over
        | to a model's context. You want to make it as easy as possible
        | for LLMs to be aware of how your software works. The fancy
        | nested navigation guide walkthrough thing is cool for users but
        | not optimized for this flow.
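
       A tiny sketch of that idea. The page titles and the "---" separator
       below are illustrative, not part of any spec: flatten every docs
       page into one markdown string a model can take in a single paste.

       ```javascript
       // Sketch: build one LLM-friendly markdown blob from a docs site,
       // in the spirit of the "llms.txt" idea. Input shape is assumed:
       // an object mapping page titles to markdown bodies.
       function buildLlmsTxt(pages) {
         return Object.entries(pages)
           .map(([title, body]) => `# ${title}\n\n${body.trim()}`)
           .join('\n\n---\n\n');
       }
       ```

       Serving that string at one stable URL (as Anthropic does with
       llms-full.txt, mentioned elsewhere in this thread) lets users drop
       the whole API surface into a prompt without scraping the fancy
       navigation.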
        
         | vrosas wrote:
         | This is literally just SEO 2.0
        
         | DangitBobby wrote:
         | This is something I'd like to have for pretty much any
         | framework/library anyway.
        
       | amelius wrote:
       | This is like saying in the 90s that Google Search would stifle
       | tech adoption ...
       | 
       | I don't buy it. AI can teach me in 5 minutes how to write a
       | kernel module, even if I've never seen one. AI brings more tech
       | to our fingertips, not less.
        
         | IshKebab wrote:
         | Did you read the article? It makes valid points that your
         | comment doesn't address. It isn't a brainless "AI is making us
         | stupider" post.
        
           | amelius wrote:
           | Yes, but do search engines not have a bias towards existing
           | technologies?
        
             | IshKebab wrote:
             | Yes but they are much more up-to-date. And they don't have
             | the react issue he mentioned.
        
       | IshKebab wrote:
       | Huh how long until advertisers pay to get their product preferred
       | by AI? If it isn't already happening...
        
       | mxwsn wrote:
       | This ought to be called the qwerty effect, for how the qwerty
       | keyboard layout can't be usurped at this point. It was at the
        | right place at the right time, even though its main design
        | choices are no longer relevant, and there are arguably better
        | layouts like Dvorak.
       | 
       | Python and React may similarly be enshrined for the future, for
       | being at the right place at the right time.
       | 
       | English as a language might be another example.
        
         | pinoy420 wrote:
         | > arguably better layouts like dvorak
         | 
          | I don't think this has any truth outside of sparking arguments.
          | I am sure there is a paper on it that showed negligible
          | differences between layouts.
         | 
         | I used to use Dvorak but then I stopped when I was around 17?
         | Qwerty for life.
        
         | basch wrote:
         | It would be interesting if Apple and Google and Samsung agreed
         | to make dvorak (or a mutually created better mobile/swipe
         | variant) the default mobile keyboard for all NEW accounts going
         | forward. Unlike hardware, software keyboards could be swapped
         | in a generation or two.
        
         | jimmaswell wrote:
         | QWERTY is a poor example. The keyboard layout is not the
         | bottleneck for anyone who does not participate in olympic
         | typing competitions. DVORAK is just as arbitrary as QWERTY to
         | everyone else including professionals, and there's value in
         | backwards compatibility e.g. old keyboards don't become
         | e-waste.
        
           | rhet0rica wrote:
           | It sounds like you're thinking purely about the speed of a
           | skilled typist. Alternative keyboard layouts offer a tangible
           | ergonomic benefit even at lower WPM counts, and can have a
           | lower hunt-and-peck time for novices by clustering
           | frequently-used letters together. (This last effect is
           | particularly pronounced on small touch screens, where the
           | seek time is non-trivial and the buttons are much too close
           | together for any sort of real touch-typing.)
        
             | jimmaswell wrote:
             | I remember reaching a really high speed on the keyboard of
             | the original iPod Touch. It actually did feel like touch
             | typing - I didn't really have to look at the on-screen
             | keyboard. I can't pin down exactly what's been missing from
             | newer keyboard apps. Something about the ergonomics and UI
             | came together just right.
        
               | jazzyjackson wrote:
                | The original iPod Touch having a 3.5" screen probably had
                | a lot to do with it; with thumb typing, a smaller keyboard
                | could be better - finer movements per keystroke. The modern
                | iPhone 14 is 6.1".
        
       | hinkley wrote:
       | I'm working on a side project that actually probably could use AI
       | later on and I'm doing everything I can not to "put a bird on it"
       | which is the phase we are at with AI.
       | 
       | I might be willing to use a SAT solver or linear algebra on it if
       | I ever get to that point but there's a lot else to do first. The
       | problem space involves humans, so optimizing that can very
       | quickly turn into "works in theory but not in practice". It'd be
       | the sort of thing where you use it but don't brag about it.
        
       | zombiwoof wrote:
       | Yup, python pretty much wins due to training data
        
       | hinkley wrote:
       | I don't like that this conclusion seems to be that if humans
       | adopt every new technology before AI can train on it that their
       | jobs will be more secure. That is its own kind of hell.
        
         | palmotea wrote:
         | > I don't like that this conclusion seems to be that if humans
         | adopt every new technology before AI can train on it that their
         | jobs will be more secure. That is its own kind of hell.
         | 
         | It's the hell we'll be forced into. The powers that be care not
         | one whit for our well being or comfort. We have to duck and
         | weave (or get smashed), while they "creatively" destroy.
        
           | lasagnagram wrote:
           | We have the power to destroy, too.
        
       | ausbah wrote:
        | I do wonder if this could be mitigated by sufficiently popular
       | newer libraries submitting training data of their library or
       | whatever in action
        
       | catapulted wrote:
       | There is a counter example for this: MCP, a standard pushed by
       | Anthropic, provides a long txt/MD optimized for Claude to be able
       | to understand the protocol, which is very useful to bootstrap new
       | plugins/servers that can be used as tools for LLMs. I found that
        | fascinating and it works really well, and I was able to one-shot
        | improve my Cline extension (a coding agent similar to cursor.sh)
        | to work with existing APIs/data.
       | 
        | It's so easy to bootstrap that even though the standard is a
        | couple of months old, it already has a few hundred (albeit
        | probably low quality) implementations adapting it to different
        | services.
       | 
       | - txt/markdown for LLMs: https://modelcontextprotocol.io/llms-
       | full.txt
       | 
       | - server implementations:
       | https://github.com/modelcontextprotocol/servers#-community-s...
        
         | Animats wrote:
         | Now we're going to see sites specifically optimized to promote
         | something to AIs. It's the new search engine optimization.
        
           | delanyoyoko wrote:
           | Prompt Engine Optimization
        
       | yieldcrv wrote:
       | Eh a cooldown period between the fanfare of a new thing and some
       | battle testing before it gets added to the next AI's training set
       | is a good thing
       | 
       | the delay is like 8 months for now, thats fine
       | 
       | I think this is also great for some interview candidate
       | assessments, you have new frameworks that AI can't answer
       | questions about yet, and you can quiz a candidate on how well
       | they are able to figure out how to use the new thing
        
       | jgalt212 wrote:
        | If you can build an app that an AI cannot, then you have some
        | sort of n-month head start on the competition.
        
       | owenversteeg wrote:
       | I think as new data gets vacuumed up faster, this will be less of
       | an issue. About a year ago here on HN I complained about how LLMs
       | were useless for Svelte as they did not have it in their training
       | data, and that they should update on a regular basis with fresh
       | data. At the time my comment was considered ridiculous. One year
       | later, that's where we are, of course; the average cutoff of "LLM
       | usefulness" with a new subject has dropped from multiple years to
       | months and I see no reason that the trend will not continue.
        
       | feoren wrote:
       | The answer to this seems obvious: continuous training of live
       | models. No more "cutoff dates": have a process to continually
       | ingest new information and update weights in existing models, to
       | push out a new version every week.
       | 
       | Note that I said "obvious", not "easy", because it certainly
       | isn't. In fact it's basically an unsolved problem, and probably a
       | fiendishly difficult one. It may involve more consensus-based
       | approaches like mixture of experts where you cycle out older
       | experts, things like that -- there are dozens of large problems
       | to tackle with it. But if you want to solve this, that's where
       | you should be looking.
        
         | thatguysaguy wrote:
         | Yeah I think every lab would love to do this and the field has
         | been thinking about it forever. (Search lifelong learning or
         | continual learning on Google Scholar). I don't think a
         | technological solution is likely enough that we should pursue
         | it instead of social solutions.
        
         | carlio wrote:
         | While you might be able to continuously update the model, are
         | you able to continuously update the moderation of it? As the
         | article says, it takes time to tune it and filter it; if you
         | allow any content in without some filtering of outputs you
         | might end up with another Tay. You'd have to think the
         | liability would slow down the ability to simply update on the
         | fly.
         | 
          | Also, if the proportion of training data available is larger
          | for more established frameworks, then the ability of the model
          | to answer usefully is necessarily dictated by the volume of
          | content, which is biased towards older frameworks.
         | 
          | It might be possible with live updating to get something about
          | NewLibX, but it would probably be a less useful answer than one
          | about 10YearOldLibY.
        
           | zelphirkalt wrote:
           | Moderation is the real reason it will be difficult to have
           | online learning models in production. I think the technical
           | side of how to do it will not be the biggest issue. The
           | biggest one will be liability for the output.
        
         | perrygeo wrote:
         | Talking to non-technical, but otherwise well-informed people,
         | there is a broad assumption that AIs already "learn" as they're
         | used for inference. IME people are surprised to find training
         | and inference to be entirely separate processes. Human
         | intelligence doesn't have such a stark distinction between
         | learning and applied learning.
        
       | tomduncalf wrote:
       | I was talking about this the other day - to some extent it feels
       | like React (and Tailwind) has won, because LLMs understand it so
       | deeply due to the amount of content out there. Even if they do
       | train on other technologies that come after, there maybe won't be
       | the volume of data for it to gain such a deep understanding.
       | 
       | Also it doesn't hurt that React has quite a stable/backwards
       | compatible API, so outdated snippets probably still work... and
       | in Tailwind's case, I suspect the direct colocation of styles
       | with the markup makes it a bit easier for AI to reason about.
        
       | mncharity wrote:
       | In contrast, I suggest AI could _accelerate_ new tech adoption.
       | 
       | > if people are reluctant to adopt a new technology because of a
       | lack of AI support, there will be fewer _people [emphasis added]_
       | likely to produce material regarding said technology, which leads
       | to an overall inverse feedback effect. Lack of AI support
       | prevents a technology from gaining the required critical adoption
       | mass, which in turn prevents a technology from entering use and
       | having material made for it,
       | 
       | At present. But what if this is a transient? It depends on the
       | new technology's dev team being unable to generate synthetic
       | material. What happens when they can create for themselves a fine
       | tune that translates between versions of their tech, and between
       | "the old thing everyone else is using" and their new tech? One
       | that encapsulates their "idiomatic best practice" of the moment?
       | "Please generate our rev n+1 doc set Hal"? "Take the new _Joe 's
       | ten thousand FAQ questions about topic X_ list and generate
       | answers"? "Update our entries in [1]"? "Translate the
       | _Introduction to Data Analysis using Python_ open-source textbook
       | to our tech "?
       | 
       | The quote illustrates a long-standing problem AI can help with -
       | just reread it swapping "AI support" to "documentation". Once
       | upon a time, releasing a new language was an ftp-able tar file
       | with a non-portable compiler and a crappy text-or-PS file and a
       | LISTSERV mailinglist. Now people want web sites, and spiffy docs,
       | and Stack Overflow FAQs, and a community repo with lots and lots
       | of batteries, and discuss, and a language server, and yes, now
       | LLM support. But the effort delta between spiffy docs and big
       | repo vs LLM support? Between SO and LLM latency? That depends on
       | how much the dev team's own LLM can help with writing it all. If
       | you want dystopian, think lots of weekend "I made my own X!"
       | efforts easily training transliteration from an established X,
       | and running a create-all-the-community-infrastructure-for-your-
       | new-X hook. Which auto posts a Show HN.
       | 
       | AI could at long last get us out of the glacial pace of stagnant
       | progress which has characterized our field for decades. Love the
       | ongoing learning of JS churn? Just wait for HaskellNext! ;P
       | 
       | [1] https://learnxinyminutes.com/ https://rigaux.org/language-
       | study/syntax-across-languages.ht...
       | https://rosettacode.org/wiki/Category:Programming_Languages ...
        
       | j45 wrote:
        | If people are skipping one shelf of tech and jumping to the next
        | shelf up with only AI trying to cover everything, and are let
        | down, maybe there is an opportunity to offer something more
        | realistic in the interim that covers both.
        
       | lasagnagram wrote:
       | No, new tech is just 100% extractive, wealth-generating garbage,
       | and people are sick and tired of it. Come up with something new
       | that isn't designed to vacuum up your data and your paycheck, and
       | then maybe people will be more enthusiastic about it.
        
         | blt wrote:
         | You didn't read the article
        
       | ramoz wrote:
       | We could call this the hamster-wheel theory.
        
       | kristianp wrote:
       | > Claude's artifacts feature
       | 
       | The article mentions that Claude's artifacts feature is
        | opinionated about using React and will even refuse to code for
        | Svelte Runes. It's hard to get it to use plain JavaScript because
        | React is in the system prompt for artifacts. Poor prompt
        | engineering in Claude.
        
       ___________________________________________________________________
       (page generated 2025-02-14 23:00 UTC)