[HN Gopher] Go-attention: A full attention mechanism and transfo...
       ___________________________________________________________________
        
       Go-attention: A full attention mechanism and transformer in pure Go
        
       Author : PaulHoule
       Score  : 70 points
       Date   : 2025-03-03 16:38 UTC (6 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | atomic128 wrote:
        | You can do a lot better than this by using Go as a JIT code
        | generator, dynamically linking the result, and jumping into it
        | with cgo. It easily saturates the CPU vector math units.
       | 
       | I use exactly this approach for the futures/options prediction
       | transformers on my website.
       | 
       | But I will never "open" another piece of software, now that it's
       | all grist for the LLM code generator industry. Anonymous common
       | property, sold by the LLM companies. No credit to the author.
       | 
       | Why anyone opens any software anymore is a mystery to me. We are
       | witnessing the greatest theft of intellectual property in the
       | history of Man.
        
         | ncruces wrote:
         | Because some people just don't care where their code ends up.
         | 
          | Many people release code to the "public domain" (or under
          | _very_ liberal licenses). If those never cared whether
          | corporate entity(tm) used it in proprietary software, why
          | should they care if an LLM chews on it and regurgitates it?
          | 
          | Also, it's far worse if entitled user(r) posts abusive issues
          | to my repo than if they copy snippets of my code through an LLM
          | and are forced to support their inferior spitballed copy all by
          | themselves.
        
           | csdvrx wrote:
           | > Because some people just don't care where their code ends
           | up.
           | 
           | Yes, take me for example.
           | 
           | > Many people release code to the "public domain" (or under
           | very liberal licenses).
           | 
            | In my case, the MIT license, because I saw it was popular,
            | and I was afraid that in some places "public domain" might
            | cause unexpected legal issues for whoever wants to "play by
            | the book" and use my code.
           | 
           | > if LLM chews on it and regurgitates it out
           | 
            | As work coming from a machine does not have copyright
            | protection, whoever gets an LLM to spit out my code back can
            | then claim it as their own, under whatever terms they like.
           | 
           | If this person wants to contribute to a free software project
           | and release the code under the GPL v2 or v3, good: it may
           | help create a new feature that users will enjoy!
           | 
            | If this person wants to contribute to their company's private
            | software that's only available on a subscription basis (and
            | let's say the subscription is sold at an eye-watering price),
            | good: it means whoever pays for this subscription will get
            | more for their money, and whoever uses the software may get a
            | new feature they will enjoy!
           | 
            | Software has nearly zero marginal cost. An LLM is the closest
            | thing to a Star Trek-level "replicator", getting everyone
            | everything they want.
            | 
            | On what moral grounds would you object to a Star Trek-level
            | replicator for physical goods? (Please make them good, as
            | offering any food anyone may want would fix world hunger once
            | and for all.)
            | 
            | Then why object to one for virtual goods?
           | 
            | Maybe I'm reading too much into your reply, but I don't see
            | it as trolling or bad faith.
           | 
            | I see variants of it in many places, and they all look to me
            | very close to Luddism: rejecting a new technology because you
            | fear for your own work, while ignoring what the technology
            | will enable in the bigger picture. In the original case of
            | Luddism, that meant reducing the price of clothing for
            | everyone by increasing production and decreasing labor,
            | freeing workers to move into other fields where they could
            | satisfy other human wants - some that would be inconceivable
            | to the original Luddites, like video games.
            | 
            | We should feel grateful we get more technology, as it removes
            | constraints and makes more people happy.
        
             | hnlmorg wrote:
              | I don't think fear for one's job is necessarily a bad
              | reason. As much as I love the idea of a Star Trek utopia,
              | real and present people have real responsibilities, like
              | children, which are cared for with money generated by their
              | careers.
             | 
             | This is particularly relevant in societies which take a dim
             | view of their social responsibilities (I'm looking at you
             | America) which means there's less of a safety net should
             | that career disappear.
             | 
              | We are already seeing more developers than job vacancies in
              | the tech market, so this isn't a theoretical concern
              | either.
             | 
             | That all said, I don't think hiding our valuable code for
             | fear of LLMs is the right solution either. If your code is
             | really that good then you'll be more likely to secure your
             | career by sharing your code because it builds a visible
             | reputation that extends further than any verbiage on a CV
             | might.
             | 
             | So while I don't agree with the LLM excuse I can still
             | completely understand why someone might cite it as a reason
             | not to open their source.
             | 
              | Another valid reason is that some people have been
              | completely burnt out dealing with entitled complaints from
              | users. Thankfully I've had a mostly positive experience
              | personally, but I've read that others haven't been so
              | fortunate.
        
               | csdvrx wrote:
               | > I'm looking at you America
               | 
               | And I'm looking back at you from America :)
               | 
                | > We are already seeing more developers than job
                | vacancies in the tech market, so this isn't a theoretical
                | concern either.
               | 
               | Agriculture also employs far fewer people than a few
               | hundred years ago, yet we have more food in quantity and
               | diversity, so I see that as a good thing.
               | 
               | I suppose we just have very different beliefs and values.
               | 
               | Thanks for your answer, as it helped me understand your
               | perspective.
        
               | hnlmorg wrote:
               | I think you've misread my comment. I'm neither the GP nor
               | against LLMs. I'm just offering a counterpoint that a
               | fear for one's job isn't an unreasonable perspective.
        
             | TeMPOraL wrote:
             | > _On which moral grounds would you object to a Star-Trek
             | level replicator for physical good? Then why object to that
             | for virtual goods?_
             | 
              | This just made me realize a distressing thing - if we ever
              | built a replicator, a lot of people might then want to
              | destroy it. For the same reason I believe they object to
              | LLMs - _greed and entitlement_. Because they don't get to
              | benefit personally, and don't get the right of first
              | refusal, the instinct is to deny the value to others. The
              | Dog in the Manger.
        
               | MyOutfitIsVague wrote:
               | I use LLMs and consider them quite useful, but I think
               | that characterization of detractors is very disingenuous.
               | People don't object to LLMs out of greed and entitlement.
               | People object to LLMs because the copyright and IP
               | systems in most of the modern world have equated copying
               | with theft for so long, complete with the threat of legal
               | action and even prison sentences. This system was said to
               | be there to keep people fed and employed. Suddenly, when
               | giant companies have billions of dollars to gain by
               | ignoring copyright, they are allowed to. We've lived in a
               | couple generations where giant companies have been able
               | to completely own and control our culture, which should
               | belong to the people.
               | 
               | People object to modern AI because it's another glaring
               | sign that capital doesn't care about human life, and the
               | people who own the capital largely don't either. They
               | will use that as a PR angle until it's not useful
               | anymore, and then proudly say the opposite when it suits
               | them. It's flagrant hypocrisy.
               | 
               | I believe that if we had sane copyright terms and limits
               | so we were actually entitled to use and share our own
               | culture and media as we see fit, and better social safety
               | nets so people whose jobs become outmoded didn't have to
               | worry about losing their homes and having their families
               | go hungry, very few people would be actually against
               | LLMs.
        
             | ncruces wrote:
             | > Maybe I'm reading too much into your reply, but I don't
             | see it as trolling or negative faith.
             | 
              | Maybe you are. All my repos are either MIT (where I'm a
              | _little_ proud, and would appreciate the acknowledgement -
              | though realistically, I'd never sue anyone over it) or
              | MIT-0.
             | 
             | So yeah, if it ends up in a LLM, and people copy it, great.
             | Less "please give me free support" requests coming my end.
        
         | csdvrx wrote:
         | > Anonymous common property, sold by the LLM companies. No
         | credit to the author.
         | 
         | Yes, and?
         | 
         | > Why anyone opens any software anymore is a mystery to me.
         | 
         | Before LLM: to share nice things with other people who may like
         | them
         | 
         | After LLM: to share nice things with other people who may like
         | them
         | 
         | > We are witnessing the greatest theft of intellectual property
         | in the history of Man.
         | 
         | Yes, and?
         | 
         | Before LLM, anyone could already take free software and copy-
         | paste the code while stripping away the author's name or the
         | license.
         | 
         | There are more motivations than getting credit, or money.
         | 
         | If what I create can ultimately have a positive impact, it does
         | not matter whether the credit goes to the LLM, to me or anyone
         | else.
         | 
         | I would suggest you question and analyze your motivations.
        
           | throwaway0123_5 wrote:
           | > I would suggest you question and analyze your motivations.
           | 
           | Full disclosure up front: I'm not anti-AI and think there are
           | a lot of potential positive applications. I currently work on
           | improving some aspects of AI.
           | 
            | That said, it _really_ isn't that hard to see why many
            | people (including me, to some extent) are nervous about AI
            | getting better, and upset that the people who made the
            | content that facilitated this are not benefiting from it.
           | 
            | If you're a SWE and not already extremely stable financially,
            | AI replacing software engineering (and _if_ that happens,
            | ~all white-collar labor) means that you'll likely be unable
            | to achieve anything close to the economic status you expected
            | and planned on.
           | 
           | > Before LLM, anyone could already take free software and
           | copy-paste the code while stripping away the author's name or
           | the license.
           | 
            | Before LLM, _people_ doing this posed ~0 existential threat
            | to the livelihood of the author. AI doing it at massive
            | scale arguably does (it remains to be seen, but it's at least
            | plausible).
           | 
            | I think most people want their work to have a positive
            | impact, and aren't _that_ concerned about credit. But they
            | aren't confident that helping LLMs will lead to long-term
            | positive impacts, and in any case are less worried about that
            | than about not being able to own a house or maybe even buy
            | necessities because of the possible economic impacts of AI.
           | 
           | Is this more of an indictment of our current social/economic
           | system than of AI itself? Probably yes, but I'm not sure it
           | matters.
        
             | csdvrx wrote:
             | > you'll likely be unable to achieve anything close to the
             | economic status expected and planned on
             | 
              | It may be a problem of expectations + loss aversion, as
              | high income from software jobs is a historically "recent"
              | phenomenon.
             | 
             | > AI doing it at a massive scale arguably does (remains to
             | be seen, but at least plausible).
             | 
              | I agree that scale can change many things, but the core
              | argument is still someone fearing for their job.
             | 
             | > Is this more of an indictment of our current
             | social/economic system than of AI itself? Probably yes, but
             | I'm not sure it matters.
             | 
             | I just want to better understand why so many people here
             | have such a negative take on a technology that is new,
             | exciting, and could bring so much to so many people!
             | 
              | Thanks for your detailed reply. Like you, I believe it's
              | just self-interest: "It is difficult to get a man to
              | understand something, when his salary depends on his not
              | understanding it."
        
               | throwaway0123_5 wrote:
               | > high income from software jobs
               | 
                | I don't think it is a worry about just software jobs,
                | though. Would people be nearly as concerned if some
                | "Technology X" deleted all software jobs but left all
                | other white-collar jobs intact? Probably not nearly as
                | much: just retrain and go do some other job. I think
                | the concern is that once an AI can replace SWEs, it is
                | likely already at the level where in short order there
                | will be _no_ value to intellectual labor, _no_ "thinking
                | jobs" that can sustain a good life.
               | 
               | So basically I don't think it is just self-interest. I
               | think a lot of people see a plausible outcome being the
               | destruction of the value of (at least intellectual)
               | labor. If labor loses all value and society isn't
               | prepared to entirely upend our economic system, _most_
                | people could suffer greatly, to the extent where the
                | positive and exciting benefits of AI don't really mean
                | much. If someone believes that not making their software
               | open-source can delay that, it isn't necessarily a
               | selfish decision. If that is what you believe, you're
               | delaying a negative outcome for a lot of people.
        
         | ninininino wrote:
          | I suppose there's a different angle, which is that the open
          | community can distill the privately trained models and then
          | open the distilled model in turn, like many believe DeepSeek
          | did. In effect, letting private corps pay for expensive
         | training (w/o paying the authors of the data they are training
         | on, as you correctly point out), but then benefiting from their
         | training labor/cost by copying it back to the open community
         | and making it free again.
        
           | pona-a wrote:
           | That does make me optimistic. "Stealing back" our stolen data
           | does in the end a free model make -- unless the current,
           | very... unprecedented US admin decides distributing
           | unauthorized distilled models carries a prison sentence.
           | 
           | But I think most of it is psychological. There used to be
           | goodwill between the public and NLP researchers: what
           | heartless monster would object to some linguists using the
           | by-product of their conversations to make a computer learn
           | that a "king - man + woman = queen" or generate some
           | unintentionally comedic writing?
           | 
            | Now this honeymoon is over. You see that what you've been
            | feeding with your public life is now a monster with a hundred
            | vices and a few good deeds. It is behind the tidal wave of
            | spam and misinfo, it is the oracle breeding ignorance among
            | the gullible, it is the iron hand of censorship for many a
            | police state, but most insulting of all, it is sold by its
            | makers as a replacement for any genuine talent or minimal
            | human effort.
           | 
            | "Why learn to draw when you can have an AI produce a cheap
            | imitation instead? Why learn math, CS, or foreign languages
            | when you can delegate any and all thinking to the great
            | machine? What did we even have you all for, anyway --
            | intellectuals, artists, and craftsmen -- with your constant
            | complaining and demands we learn a skill? Who do they think
            | they are? Experts?"
           | 
            | No, the future belongs to the lazy and talentless, to thieves
            | and usurpers, who will sit at the top with an aristocratic,
            | borderline catatonic, brainlessness, while you will be at
            | their knees, polishing their boots -- since the machine to do
            | so costs an order of magnitude more than your meager wage.
           | 
           | It is anti-intellectualism in a form purified to industrial
           | potency, directed at the very people by whose generosity
           | their rather inept "replacements" were manufactured.
           | 
            | I can't say what the rational response to all this is. I can
            | tell you which emotional response seems most appealing.
        
         | sph wrote:
         | > But I will never "open" another piece of software, now that
         | it's all grist for the LLM code generator industry.
         | 
         | This is an interesting, albeit offtopic, discussion. My last
         | few projects are still stored as private repos because I do not
         | want them to be gobbled up by LLMs for junior, expendable and
         | cheaper devs to replace me, especially when I am exploring
          | novel problem-spaces. In fact, I do not care to be convinced
          | otherwise; ideologically speaking, I am completely opposed to
          | any form of LLM or "AI".
         | 
         | I daydream of a niche, nerdy network of nerds, outside the
         | world wide web, to share my stuff with humans. Until then, I am
         | not sure whether my projects should be open-source. The ones
         | that will benefit the most are the idiotic machines and their
         | operators.
         | 
         | Should we resurrect Gopher for a few months, until they catch
         | up?
        
           | csdvrx wrote:
            | Your perspective is very strange, as I want to be replaced by
            | cheaper devs - or ideally, even machines, which would free us
            | humans to work on other problems that machines can't work on
            | yet.
           | 
           | > In fact, I do not care to be convinced otherwise, I am
           | ideologically speaking completely opposed to any form of LLM
           | or "AI".
           | 
           | Then could you try to convince me of your argument?
           | 
            | I don't see tech problems as worse or better than other
            | problems like growing food, which we were fortunate to solve
            | with industrialized farming and other technological
            | breakthroughs like the Haber-Bosch process.
            | 
            | Explain to me why I should join your daydream!
        
             | sph wrote:
             | > which would free us humans to work on other problems
             | 
             | We have heard this refrain since the start of the
             | Industrial Revolution.
        
               | hnlmorg wrote:
                | And it's been true. I don't have to break my back tilling
                | the earth to grow crops, nor risk other kinds of
                | life-changing injuries working in factories, down the
                | mines, etc.
               | 
               | Instead, I've got a comfortable job which requires using
               | my brain. Which is an opportunity that someone from my
               | social class wouldn't have been granted even just 100
               | years ago.
        
               | sph wrote:
               | We were talking about machines freeing _time_ so we can
               | do other things, do not move the goalposts to fit your
               | argument.
               | 
                | Yes, I do not have to break my back 5 hours a day in the
                | field; I only have to sit in an office 8 hours, plus 2
                | hours of commute a day. Also make sure you check your
                | Slack notifications at home. [1] I hope you enjoy your
                | couple of hours on the weekend to read philosophy and
                | paint landscapes.
               | 
               | 1: in fact I don't waste all my time working like most,
               | but that makes me unhireable to 99% of companies that
               | just want to squeeze every productive minute out of me.
        
               | ncruces wrote:
               | Let's all go back to doing the laundry by hand. Because
               | life was better back then.
        
               | hnlmorg wrote:
               | > We were talking about machines freeing time so we can
               | do other things, do not move the goalposts to fit your
               | argument.
               | 
                | That's a very uncharitable comment given "time" wasn't
                | mentioned once in either your comment or the GP's.
               | 
               | You might have read their comment as meaning "less work"
               | (time) but for me it reads that they're talking about
               | "different and more interesting work".
               | 
               | Both interpretations are valid.
               | 
               | I get a sense from your rant that you're massively
               | overworked. I do sympathise with you there, I honestly
               | do. But that's got nothing to do with AI.
        
               | MyOutfitIsVague wrote:
                | The biggest use of professional AI will be to offset the
                | majority of big-brain jobs, possibly leaving physical,
                | back-breaking labor as the majority of jobs left for
                | humans at some point before long.
        
               | hnlmorg wrote:
               | We've long since proven that machines are better for
               | back-breaking labor.
               | 
               | I also disagree that AI will offset the majority of big
               | brain jobs. What's actually happening is AI is offsetting
               | the majority of narrowly defined queries. The more open
               | ended problems require an actual understanding of problem
               | solving rather than a really clever text prediction
               | engine.
               | 
               | Some examples to illustrate my point:
               | 
               | If you wanted to build a mobile app, you'd need to first:
               | 
               | - understand the problem you're trying to solve (is it a
               | game? A medical app? A productivity app? Etc)
               | 
               | - choose a frontend stack
               | 
               | - create a design language
               | 
               | - create wireframes
               | 
               | - decide where to host the backend stack (eg self hosted?
               | Or public cloud? If so, which?)
               | 
               | - decide which cloud services to use (if public cloud)
               | 
               | - decide on how to approach IaC
               | 
               | - decide on the backend stack
               | 
               | - decide on how to implement your CI/CD pipelines (or
               | even if you want continuous delivery)
               | 
               | - understand what privacy and security concerns you have
               | and identify what risks you're willing to accept
               | 
                | - define testing strategies
               | 
               | - define release cadences
               | 
               | And so on and so forth.
               | 
               | Writing the code is often actually the easiest part of
               | software development because we already understand the
               | problem by that point. The rest of the work is figuring
               | out all questions that need to be answered -- and I don't
               | even mean _answering_ the questions, I mean understanding
               | what questions to ask.
               | 
               | AI can't do that. GenAI requires human input and much of
               | software development is actually figuring out what those
               | inputs should be rather than generating that output.
               | 
               | So that's what I mean by "big brain jobs". It's not
               | writing code, because that's easy. It's understanding and
               | defining those requirements to begin with.
        
           | pizzafeelsright wrote:
            | I long for your confidence in your own ingenuity. If you
            | think private repos on third-party systems are hidden from
            | training, I suppose you have more trust in expendable
            | developers working to maximize training data.
           | 
            | At this point I am fairly convinced that software development
            | is solved. The reality has not been understood by most, as
            | typing syntax is now a trivial task best left to AI.
        
             | sph wrote:
             | I have been migrating my private repos to a private Gitea
             | instance of my own. I am well aware that anything on Github
             | is used to train Copilot.
             | 
             | > At this point I am fairly convinced that software
             | development is solved.
             | 
             | Writing code is 20% of software development, which is why I
             | am still in demand even if I refuse to use LLM software.
              | But employers and recruiters are not rational, and will
              | soon prefer hiring cheap, expendable "monkeys on a
              | typewriter" to experienced engineers.
        
             | imtringued wrote:
              | I wasted at least an hour today waiting for antivirus
              | software to let my software start.
              | 
              | No amount of AI will overcome the slowness of antivirus
              | software.
             | 
              | Also, about software development being "solved": I beg to
              | differ.
              | 
              | Sakana AI failed to reliably produce working CUDA kernels,
              | and that was in a domain where the task was explicitly
              | specified as PyTorch code - so it was, in a way, just a
              | glorified syntax-typing demonstration.
        
           | hu3 wrote:
           | > My last few projects are still stored as private repos
           | because I do not want them to be gobbled up by LLMs for
           | junior, expendable and cheaper devs to replace me, especially
           | when I am exploring novel problem-spaces.
           | 
           | I'll try to be gentle, pardon me in advance.
           | 
           | 1) Your problem space is probably not novel enough to warrant
           | such preciousness.
           | 
           | 2) "Junior, expendable and cheaper devs" don't compete with
           | Senior+ because they don't know how to even ask the right
           | questions. And they don't possess enough soft skills to
           | navigate hard projects.
           | 
            | 3) But let's suppose that you do indeed have IP that's more
            | valuable to humanity than the CRUD crap we are paid to spit
            | out on a daily basis: we will all die, and it can happen
            | anytime now, so why risk having your contribution die with
            | you? Even if you set up posthumous access for other people,
            | there's a risk no one cares enough to dig through your code.
            | Heck, this is what PhDs are about: pushing humanity's
            | knowledge ceiling just a bit higher. And even those who do,
            | barely get enough attention.
           | 
           | 4) Fighting AI is like trying to boil the ocean. We won't
           | make a difference.
        
           | pona-a wrote:
            | Gemini was looking quite nice the last time I was there.
        
         | badsectoracula wrote:
         | > Why anyone opens any software anymore is a mystery to me.
         | 
          | Because I open my software to be useful to others, including
          | others that may benefit from my code indirectly via an LLM
          | being trained on it. If anything, just recently I was thinking
          | of how to make a documentation generator that produces
          | documents in a format that'd be easier for LLMs to "grok", so
          | that people can feed it to an LLM and ask questions about it.
         | 
         | I'd advocate for using a local LLM instead though, they may not
         | be technically as good as the cloud stuff you rent, but they
         | are good enough, can run on most mid-to-high-end PCs and you
         | are in control.
        
         | PaulHoule wrote:
          | Marshall McLuhan said in the 1960s that technology and culture
          | are moving so fast that we are "driving by looking in the rear
          | view mirror".
          | 
          | Nothing says "I am slow on the draw" to me more than "all of a
          | sudden I'm worried about getting ripped off by OpenAI", as the
          | open source and web economies have long been recognized as
          | exploitative.
         | 
         | (1) This guy http://www.seobook.com/blog has been talking about
         | how the Google economy has been rigged since at least 2010, and
         | I can say that I've lived it.
         | 
         | (2) This cartoon https://xkcd.com/2347/ illustrates the hard
         | time open source has sustaining itself. Open-source doesn't
         | need to fund all the value-subtracting vultures that you need
         | to sell enterprise software, but it struggles to scrape
         | together just a few bucks for people who are capable of doing
         | work on a shoestring.
         | 
          | (3) Open source licenses have been getting tougher for
          | database products in particular, because hosting a database
          | like MySQL is a great business for the likes of AWS or Azure,
          | which don't need to send a penny back to the creators, and the
          | GPL's copyleft doesn't do anything about it.
         | 
         | ---
         | 
         | I'd say also as a creative person everything I'm exposed to
         | becomes part of my work, particularly when I am receptive and
          | "on the make"; I think of the video for Groove Armada's
          | _Superstylin'_ [1], which is all about a person seeing
         | everything in the environment and finding inspiration and using
         | their talents to make other people's talents even greater.
          | Don't laugh, but my son and I got a lot out of the anime _2.5
          | Dimensional Seduction_ [2] because it is all about people of different
         | generations, under pressure, figuring out how to blend their
         | talents to compete _and_ cooperate. So much of what I consume
         | nourishes me, becomes part of me, and becomes part of what I
         | create, not any different from a LLM.
         | 
         | [1] https://www.youtube.com/watch?v=_kE0pxRkMtQ
         | 
         | [2] https://en.wikipedia.org/wiki/2.5_Dimensional_Seduction
        
         | zbobet2012 wrote:
          | It depends on how long you spend in your C function. cgo has a
          | substantial per-call overhead. I tend to
         | prefer just writing ASM functions for critical path code. You
         | can use libraries like https://github.com/mmcloughlin/avo to
         | make it easier to write/maintain.
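          A rough sketch (not from the thread) of how one might measure the
          per-call gap being described: it times a trivial C function
          called through cgo against an equivalent pure-Go function. The
          `addone` helper is a made-up example, and the absolute numbers
          vary by machine; the point is only that cgo's fixed per-call cost
          dominates unless the C side does substantial work.

          ```go
          package main

          /*
          static double addone(double x) { return x + 1.0; }
          */
          import "C"

          import (
          	"fmt"
          	"time"
          )

          // addoneGo is the pure-Go equivalent; noinline keeps the
          // comparison honest by forcing a real function call.
          //
          //go:noinline
          func addoneGo(x float64) float64 { return x + 1.0 }

          func main() {
          	const n = 1_000_000

          	// Time n pure-Go calls.
          	start := time.Now()
          	var s float64
          	for i := 0; i < n; i++ {
          		s = addoneGo(s)
          	}
          	goNs := float64(time.Since(start).Nanoseconds()) / n

          	// Time n cgo calls to the same trivial computation.
          	start = time.Now()
          	var c C.double
          	for i := 0; i < n; i++ {
          		c = C.addone(c)
          	}
          	cgoNs := float64(time.Since(start).Nanoseconds()) / n

          	fmt.Printf("pure Go: %.1f ns/call, cgo: %.1f ns/call\n", goNs, cgoNs)
          	_ = s
          }
          ```

          Requires a cgo-enabled toolchain (CGO_ENABLED=1 and a C
          compiler); the avo route mentioned above avoids this overhead
          entirely because Go assembly functions use the normal Go calling
          convention.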
        
         | EGreg wrote:
         | Because we live in a society where the incentives are not
         | conducive to collaboration.
         | 
         | Imagine if everyone had a UBI, and you contributed fixes and
         | improvements to a project because you thought it was cool.
          | That's how professors with tenure have, for centuries, rushed
          | to publish scientific articles, for instance.
         | 
         | In fact, in antiquity, we had the opposite problem to what you
         | describe ... often things were credited to a certain leader of
         | a school (e.g. Pythagoras) even if he didn't invent them.
         | 
         | The problem is that people are thinking about how to make a
         | profit, and it's ubiquitous because it's wrapped up with trying
          | to survive at all. By the time you make millions of dollars,
          | you still have the PTSD from trying to outrun the bear, and you
          | keep going by inertia, making billions.
         | 
         | Competition is far less healthy than collaboration. It's what
         | causes the countries on earth to speed towards AI apocalypse
          | for instance, or fossil fuel apocalypse, etc. The few times
          | they have cooperated (e.g. the Montreal Protocol, or the
          | nonproliferation of chemical weapons) have been resounding
          | successes. I doubt they'll be able to come together like that
          | for AI, though!
        
         | umvi wrote:
         | The software I open is usually a gift to humanity/public
         | service. I'm not seeking to gain anything. Anyone can use it
         | for anything - for profit or not.
        
           | diggan wrote:
           | Or put the way I usually say it, in completely normal
           | conversations:
           | 
           | > free of charge, to any person obtaining a copy of this
           | software, to deal in the Software without restriction,
           | including without limitation the rights to use, copy, modify,
           | merge, publish, distribute, sublicense, and/or sell copies of
           | the Software
        
         | timewizard wrote:
         | > Why anyone opens any software anymore is a mystery to me.
         | 
         | Cynically? To get noticed and get a job.
         | 
         | > We are witnessing the greatest theft of intellectual property
         | in the history of Man.
         | 
          | What makes you think they weren't already doing it? It could be
          | that LLMs masquerading as "AI" are actually just a
          | risk-reduction measure for practices that already existed.
        
       | neonsunset wrote:
       | Just scalar code? I was hoping to see some Goasm here for
        | acceptable performance (or you could rewrite it in F#/C#, which
        | provide appropriate SIMD primitives).
       | 
       | edit: to answer my own question, when inspected with Ghidra, this
       | implementation indeed compiles to very slow scalar code (operates
       | on single fp64 values).
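        For readers unfamiliar with the distinction, a hypothetical sketch
        (not taken from the repo) of the kind of inner loop at issue: the
        naive version performs one fp64 multiply-add per iteration, while
        the unrolled version keeps four independent accumulators, which is
        the scalar analogue of what a hand-written SIMD kernel does with
        packed registers.

        ```go
        package main

        import "fmt"

        // dotScalar is the shape of loop that compiles to slow scalar
        // code: each iteration's add depends on the previous one, so the
        // CPU's vector units sit idle.
        func dotScalar(a, b []float64) float64 {
        	var s float64
        	for i := range a {
        		s += a[i] * b[i]
        	}
        	return s
        }

        // dotUnrolled4 keeps four independent accumulators so adjacent
        // iterations don't serialize on a single add chain; a Go assembly
        // kernel would take the same idea further with packed registers.
        func dotUnrolled4(a, b []float64) float64 {
        	var s0, s1, s2, s3 float64
        	i := 0
        	for ; i+4 <= len(a); i += 4 {
        		s0 += a[i] * b[i]
        		s1 += a[i+1] * b[i+1]
        		s2 += a[i+2] * b[i+2]
        		s3 += a[i+3] * b[i+3]
        	}
        	for ; i < len(a); i++ { // handle any leftover tail elements
        		s0 += a[i] * b[i]
        	}
        	return s0 + s1 + s2 + s3
        }

        func main() {
        	a := make([]float64, 1024)
        	b := make([]float64, 1024)
        	for i := range a {
        		a[i] = float64(i)
        		b[i] = 2
        	}
        	// Both variants compute the same dot product.
        	fmt.Println(dotScalar(a, b), dotUnrolled4(a, b))
        }
        ```

        Note that unrolling changes floating-point summation order, which
        matters for ill-conditioned inputs even though the values here are
        exact.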
        
       | stpedgwdgfhgdd wrote:
       | Great to see these algorithms in Go. Finally I can study them at
       | the implementation level as opposed to reading blogs.
        
       ___________________________________________________________________
       (page generated 2025-03-03 23:01 UTC)