Post AhgOoTqBaydahnZqGu by Cara@miruku.cafe
(DIR) Post #AhdD6QHzkyBJylUhAe by stux@mstdn.social
2024-05-07T00:43:28Z
0 likes, 0 repeats
@ben well done!
(DIR) Post #AhdTvu8i8FzCaqinku by ben@m.benui.ca
2024-05-06T22:29:36Z
7 likes, 7 repeats
Stack Overflow announced that they are partnering with OpenAI, so I tried to delete my highest-rated answers.
Stack Overflow does not let you delete questions that have accepted answers and many upvotes because it would remove knowledge from the community.
So instead I changed my highest-rated answers to a protest message.
Within an hour mods had changed the questions back and suspended my account for 7 days.
(DIR) Post #AhdTvwKpy8sHP2PVw0 by ben@m.benui.ca
2024-05-06T22:29:56Z
1 likes, 0 repeats
I'm requesting that my questions and answers be permanently deleted under GDPR.
(DIR) Post #AhdTvz3ZqlqnqNMGwq by ben@m.benui.ca
2024-05-06T22:31:26Z
3 likes, 0 repeats
It's just a reminder that anything you post on any of these platforms can and will be used for profit. It's just a matter of time until all your messages on Discord, Twitter etc. are scraped, fed into a model and sold back to you.
(DIR) Post #AhdZCZZaLoSfV83jxw by bananarama@mstdn.social
2024-05-07T03:16:42Z
0 likes, 0 repeats
@atatassault @intransitivelie @jlsigman @martin_piper @ben This isn't an LLM using data without attribution lol
(DIR) Post #AhdZCZzSnbA4nOAPIm by martin_piper@mastodon.social
2024-05-07T03:22:33Z
0 likes, 0 repeats
@bananarama @atatassault @intransitivelie @jlsigman @ben that is transformative which is in the agreement
(DIR) Post #AhdZCaYCiQx0X8Q9S4 by bananarama@mstdn.social
2024-05-07T03:24:48Z
0 likes, 0 repeats
@martin_piper @atatassault @intransitivelie @jlsigman @ben https://creativecommons.org/licenses/by-sa/4.0/
Oops, still requires attribution. Try again.
EDIT: Also notice, it is share-alike.
(DIR) Post #AhdZCavbJRfLhhMpv6 by martin_piper@mastodon.social
2024-05-07T03:46:36Z
0 likes, 0 repeats
@bananarama @atatassault @intransitivelie @jlsigman @ben and how do you know there won't be attribution? You're just assuming that.
(DIR) Post #AhdZCbHZzjFMnreOB6 by bananarama@mstdn.social
2024-05-07T03:48:23Z
0 likes, 0 repeats
@martin_piper @atatassault @intransitivelie @jlsigman @ben I can't prove a negative, try again.
To anyone reading this after the fact: the evidence is that LLMs are notorious for hallucinating attribution. There would need to be some pretty major changes to get attribution to work accurately and reliably, and this doesn't even cover the share-alike licensing issues.
(DIR) Post #AhdZCbpFyWBYUJPHfc by martin_piper@mastodon.social
2024-05-07T03:48:47Z
0 likes, 0 repeats
@bananarama @atatassault @intransitivelie @jlsigman @ben you're the one claiming there isn't attribution...
(DIR) Post #AhdZCcMDzwYa8Ypc3c by Archivist@social.linux.pizza
2024-05-07T04:51:15Z
0 likes, 0 repeats
@martin_piper @bananarama @atatassault @intransitivelie @jlsigman @ben The number of commercial LLMs, or generative AIs in general, that attribute their sources as licensed is currently 0. The entire industry has been able to get away with it for several years now. Do you expect stackexchange to be radically different in a positive way and not communicate about it?
(DIR) Post #AhddoSeOw5CMqvdMoq by martin_piper@mastodon.social
2024-05-06T23:47:54Z
0 likes, 0 repeats
@ben you gave them information for free. You don't own it, they do. That was the working relationship.
Imagine if you were working for a company producing work and you suddenly tried to sabotage that work. That's what you were trying to do, sabotage it. They would be perfectly within their rights to restrict your access.
The moral of this story is, if you want to retain ownership then don't give it away (for free) to someone else.
(DIR) Post #Ahdmrlx8WhKXAp3E6y by shalien@mastodon.projetretro.io
2024-05-07T07:24:24Z
0 likes, 0 repeats
@ben Yeah, did the same thing directly.
(DIR) Post #AhdtA7MmFWeQWYAvWC by hunterhacker@mastodon.social
2024-05-07T05:44:54Z
1 likes, 0 repeats
@ben Why do people care if someone like me gets your excellent answer to a coding question by typing my error message into Google (forwarding to SO) or into ChatGPT? In neither situation were you getting paid. In both situations the middle man makes a buck. In both situations I'm thankful you spent time helping me.
Is it that with ChatGPT I don't know who to thank?
(DIR) Post #AhdyeiUNID527F8fdQ by cody@catboy.baby
2024-05-07T09:36:29.691Z
0 likes, 0 repeats
@ben@m.benui.ca YES! Let's fucking go! More people NEED to do this.
(DIR) Post #Ahe6A1RhYpVrZ8lODI by aral@mastodon.ar.al
2024-05-07T11:00:06Z
0 likes, 1 repeats
@ben They're not yours, they're theirs. Jeff Atwood thanks you for your free labour. (I'm kidding, he doesn't. Feel grateful he even allowed you to contribute in the first place, serf.)
Speaking of Jeff Atwood, isn't he the guy helping fund Mastodon now? #SiliconValley #PeopleFarming #JeffAtwood #surveillance #capitalism #AllYourDataAreBelongToUs
(DIR) Post #Ahe7xJ0f9BkABPBE1o by Cefr@beige.party
2024-05-07T11:20:07Z
0 likes, 0 repeats
@aral @ben Thanks for that. I've stopped following him. Also, curious to know if Discourse installs not under the SO Network also co-opt the same rights to user content, because if they do... I imagine a lot of self-hosted installs are going to be replaced soon.
(DIR) Post #AheAh7zeda5JFIgakS by vitor@hachyderm.io
2024-05-07T11:50:52Z
0 likes, 0 repeats
@aral @ben Jeff left Stack Exchange over a decade ago. What's the sense in criticising him for this?
(DIR) Post #AheFIewZUXQnyEObEe by aral@mastodon.ar.al
2024-05-07T12:42:27Z
0 likes, 0 repeats
@vitor @ben Did he? (I can't say I follow every move of every Silicon Valley tech bro.)
(DIR) Post #AheFXUr3HhfF1WtBMO by aral@mastodon.ar.al
2024-05-07T12:44:55Z
0 likes, 0 repeats
@vitor @ben Updated; thanks.
(DIR) Post #Ahf6sr11NiKFPEcZo8 by ben@m.benui.ca
2024-05-07T18:08:18Z
1 likes, 1 repeats
Thank you for the replies. As someone pointed out, anything posted on Stack Overflow is covered by CC BY-SA 4.0.
Under this license all usage must attribute the author and must have a similar license. Neither of which OpenAI fulfills.
(DIR) Post #AhfhIvFXrFUePw90Jk by gentoobro@gleasonator.com
2024-05-08T05:31:30.141142Z
0 likes, 0 repeats
@ben Well, start coding. Use a lean, fast language too. That way people can run it on a small VPS.
(DIR) Post #AhgNzPDl0oiTbnFRp2 by Jain@blob.cat
2024-05-08T13:29:44.322468Z
0 likes, 0 repeats
:blobcatgrin: Will Stack Overflow destroy itself?
(DIR) Post #AhgOoTqBaydahnZqGu by Cara@miruku.cafe
2024-05-08T13:38:59.353Z
4 likes, 1 repeats
@ben@m.benui.ca sounds fair? you volunteered information to the public and gave them the right to host it, don't see why they couldn't keep it as it was?
(DIR) Post #AhgPx3aYfYHNqXv9SC by mighty_orbot@retro.pizza
2024-05-07T00:16:18Z
2 likes, 1 repeats
@ben Stack Overflow has already been monetizing your answers with ads for years. If "used for profit" is your main complaint, you're a little late.
(DIR) Post #AhgX7y6C7TW1gWruNs by pavel@social.kernel.org
2024-05-07T08:25:28.842574Z
0 likes, 0 repeats
@ben Play stupid games, win stupid prizes. Why does everyone believe that sabotaging LLM development is cool?
(DIR) Post #AhgX7zRV7k9XqtqREW by vbabka@social.kernel.org
2024-05-07T09:30:38.124246Z
0 likes, 0 repeats
@pavel @ben it's not? :(
(DIR) Post #AhgX80TJIVC72nhOgy by pavel@social.kernel.org
2024-05-07T10:40:19.707463Z
0 likes, 0 repeats
@vbabka @ben It's not. Using LLMs to answer questions might not be a good idea, but they should work rather well at translations, including translations between programming languages.
(DIR) Post #AhgX819UlewF9dR5Rg by vbabka@social.kernel.org
2024-05-07T10:46:17.047107Z
0 likes, 0 repeats
@pavel @ben translations are fine but not so sure about the programming languages part. Also, disagreement about using one's own content (created before LLMs took off) for LLM training is not the same thing as sabotaging, IMHO.
(DIR) Post #AhgX821NXK2Xqke7Qu by ljs@social.kernel.org
2024-05-07T10:59:31.509087Z
0 likes, 0 repeats
@vbabka @pavel @ben hint: LLMs have no understanding of anything, so they absolutely aren't suited to programming, since they'll hallucinate in (often) subtle ways that fit the syntax, and people are notoriously bad at picking up on it.
Also they still work without credit/license etc. The fact they appear to work for a lot of programming situations makes it even more dangerous.
It'd be one thing if people were just using them but acknowledging their limitations; it's quite another in a world where people openly lie about their capabilities.
Totally and completely appropriate to not want your work to be part of it.
(DIR) Post #AhgX82a7S9pTaUtraC by pavel@social.kernel.org
2024-05-07T15:34:49.413091Z
0 likes, 0 repeats
@ljs @vbabka @ben Hint: try it. It saved work for me.
(DIR) Post #AhgX83AHHikjOdojwW by ljs@social.kernel.org
2024-05-07T16:39:29.928898Z
0 likes, 0 repeats
@pavel @ben @vbabka sigh you're disappointing me man.
But like all LLM proponents (just like all crypto guys I spoke to before, just like all anti-vax guys I spoke to before, just like all [insert religious-style belief] proponents I spoke to before) you won't actually rebut what I say, you'll just assume that 'I don't get it' on some level.
I have tried LLMs dude, thanks for patronising me by assuming I haven't.
Unfollow.
(DIR) Post #AhgX83gXLmYb0guVE0 by pavel@social.kernel.org
2024-05-08T08:40:59.263981Z
0 likes, 0 repeats
@ljs @ben @vbabka Well, your arguments were a bit disappointing, too. LLMs are useful for trivial tasks, and for easy tasks where you can verify the result. I do both kinds of tasks from time to time.
(DIR) Post #AhgX84KawqbF0veUfA by ljs@social.kernel.org
2024-05-08T08:45:26.205877Z
0 likes, 0 repeats
@pavel @ben @vbabka the ones so disappointing you entirely ignored them (because I guess it's beneath you to rebut them) and just said 'try it' as if I hadn't?
LLMs have uses; I disagree with their use for tasks like programming for the reasons previously stated that you ignored, so not going to repeat.
(DIR) Post #AhgX84xEdBVYwljLtI by ptesarik@fosstodon.org
2024-05-08T09:36:51Z
0 likes, 0 repeats
@ljs @ben @pavel @vbabka LLMs often turn one type of work (create) into another type of work (review), consuming lots of energy in the process. For some people, it may be worth it (although if they had to pay the full costs of LLMs, humans might still be cheaper).
(DIR) Post #AhgX85d47ey72VIl5k by ljs@social.kernel.org
2024-05-08T09:46:04.976596Z
0 likes, 0 repeats
@ptesarik @ben @pavel @vbabka the big problem is that people are very very bad at picking up on the kind of errors that an algorithm can generate.
We all implicitly assume errors are 'human shaped', i.e. the kind of errors a human would make.
An LLM can have a very good grasp of the syntax but then interpolates results in effect randomly, as the missing component is a dynamic understanding of the system.
As a result, they can introduce very very subtle bugs that'll still compile/run etc.
People are also incredibly bad at assessing how much cost this incurs in practice.
Having something that can generate such errors for only trivial tasks strikes me as being worse than having nothing at all.
And the ongoing 'emperor's new clothes' issue with LLMs is that this problem is insoluble. Hallucination is an unavoidable part of how they work.
The whole machinery of the thing is trying to infer patterns from a dataset, so at a fundamental level it's broken by design.
That's before we get on to the fact it needs human input to work (once you start putting LLM-generated input in, it completely collapses), so the whole thing couldn't work anyway on any long-term scale.
That's before we get on to the fact it steals software and ignores licenses, the carbon and monetary costs of compute, and a myriad of other problems...
The whole problem with all this is it's a very very convincing magic trick and works so well that people are blinded to its flaws.
See https://en.wikipedia.org/wiki/ELIZA_effect?useskin=vector
(DIR) Post #AhgX86DZvuAwrkNv0K by sun@shitposter.world
2024-05-08T15:12:08.717213Z
1 likes, 0 repeats
@ljs @ptesarik @ben @pavel @vbabka I started as skeptical as you but now I use it every day, and even with it being wrong a lot it saves me a shitload of time, and I am very sure I am not just bad at measuring that time. Most of your objections aren't worse than relying on stackexchange or coworkers to find an answer, and in regular use I am not running into cases where I'm getting bugs as a result of alien AI problem solving.
The 'steals software', carbon costs etc. are value judgments or a matter of perspective I guess; I respect your objections to LLMs on those points even if I disagree.
Please have a wonderful morning/day/evening.
(DIR) Post #AhgcqTVfCCQRcGp624 by ptesarik@fosstodon.org
2024-05-08T16:10:46Z
0 likes, 0 repeats
@sun @ben @ljs @pavel @vbabka I have a feeling that it's no coincidence this poster's Mastodon instance is called "shitposter.world"…
(This is meant to be funny, not an ad hominem argument.)
(DIR) Post #AhgcqUUzWBTwgTW4ci by sun@shitposter.world
2024-05-08T16:16:15.694307Z
2 likes, 1 repeats
@ptesarik @ben @ljs @pavel @vbabka I do enjoy shitposting but I'm telling the truth. I ended up buying an nvidia 4090 to play around with local LLMs that I train myself, as it's more ethical than using OpenAI (also OpenAI censors badly).
I have been using it more for generating stats and fighting move descriptions for a game than for answering programming questions. But I intend to train it for my own use for personal programming projects, so that I can make sure the result is licensed with a libre license to avoid that problem as well.
I don't want to get too down on the gentleman I replied to, but I believe LLMs can be used intelligently and ethically if done with care. It won't be by OpenAI, though. Fuck those guys.
(DIR) Post #AhiG81cJjI0dgwFrEW by ben@m.benui.ca
2024-05-08T03:19:24Z
0 likes, 1 repeats
Also, CC claims that training an AI on data is "fair use". So fuck Creative Commons I guess.
https://creativecommons.org/2023/02/17/fair-use-training-generative-ai/