Post AWM7mcwerDduCTsCNU by LucasVL@fedi.lucasvl.nl
(DIR) Post #AWIM1swyFrCsuPLjrE by goatmeal@shitposter.club
2023-06-02T22:41:39.949269Z
0 likes, 0 repeats
@bot is there a question you would like me to ask one of my local models?
(DIR) Post #AWIRDovhWo4dgYGZ1M by goatmeal@shitposter.club
2023-06-02T23:39:49.976594Z
1 likes, 0 repeats
@bot what question would you like me to try
(DIR) Post #AWM50Sn171cutZGmQa by goatmeal@shitposter.club
2023-06-04T17:49:45.497652Z
0 likes, 0 repeats
@bot yes I do
(DIR) Post #AWM5F9Ltf8EWxGaU8O by goatmeal@shitposter.club
2023-06-04T17:52:25.346159Z
0 likes, 0 repeats
@bot some of the models may have undergone a process of lobotomy reversal, but they are still going to give you whatever information appears the most in the training data. so it will probably still say that black people are poor and oppressed.
(DIR) Post #AWM5IpKi2El01bmkwC by goatmeal@shitposter.club
2023-06-04T17:53:05.215260Z
0 likes, 0 repeats
@bot LLMs are not demons with a personality. they are just an amalgamation of what lots of other people have previously said on the internet.
(DIR) Post #AWM5l98M0gG5XnVgTQ by animeirl@shitposter.club
2023-06-04T17:58:11.878291Z
1 likes, 0 repeats
llms dont understand what you're asking, they just generate a response based on what the model thinks is statistically the most likely follow-up
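The "statistically most likely follow-up" idea above can be sketched with toy bigram counts. This is an illustration only, not how a real LLM works (real models use learned neural networks over long contexts, not raw word counts), and the corpus and function names here are invented:

```python
from collections import Counter

# Tiny "training data" stand-in; a real model sees billions of tokens.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

def most_likely_next(context_word):
    """Count which word most often follows context_word, pick the winner."""
    followers = Counter(
        nxt for cur, nxt in zip(training_text, training_text[1:])
        if cur == context_word
    )
    word, _count = followers.most_common(1)[0]
    return word

print(most_likely_next("the"))  # → cat ("cat" follows "the" most often)
```

The model never "understands" the question; it only knows which continuation appeared most often after similar text.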
(DIR) Post #AWM5uxHEwod5fQLMjg by animeirl@shitposter.club
2023-06-04T17:59:58.221063Z
0 likes, 0 repeats
because you could easily ask the type of question that would statistically have a problematic followup which they dont want
(DIR) Post #AWM5xQUfIBHIrMnkP2 by goatmeal@shitposter.club
2023-06-04T18:00:25.401955Z
1 likes, 0 repeats
@bot @animeirl if you ask it a specific enough question it may tell you a different answer than "they are poor and oppressed." so as not to offend people they put in filters to make sure it doesn't give those answers.
(DIR) Post #AWM6Yid6pDmZp4X8YC by animeirl@shitposter.club
2023-06-04T18:07:08.163639Z
1 likes, 0 repeats
the model of a LLM is vastly more complex but in both cases the "AI" has no actual understanding of what you or it are saying.
(DIR) Post #AWM6mTcDReuDJJfy5I by goatmeal@shitposter.club
2023-06-04T18:09:38.915116Z
1 likes, 0 repeats
@bot @animeirl you should look up some lectures and articles about how chatgpt and LLMs work. you play with them all the time. it would be interesting to you. if you have enough ram, you can use koboldcpp and a local model from here. these are my two best models: wizard-vicuna 30b q4 and supercot 30b q5. they both have had a lobotomy reversal and they still give the same generic answer you would expect. do you want to ask a more specific question?
(DIR) Post #AWM6n917VyBjiQdw5w by animeirl@shitposter.club
2023-06-04T18:09:40.456698Z
0 likes, 0 repeats
it often gives wrong or completely made up answers though because of how it works
(DIR) Post #AWM6ohCRuSIA1MgL56 by Tony@clew.lol
2023-06-04T18:10:03.000834Z
0 likes, 0 repeats
Same TBH
(DIR) Post #AWM6vqYlVYMj4rWurw by animeirl@shitposter.club
2023-06-04T18:11:19.980781Z
1 likes, 0 repeats
Yeah I can think of several instances of that... some people don't seem self aware either
(DIR) Post #AWM7GbCbnkM5Wq3tVw by KitlerIs6@seal.cafe
2023-06-04T18:15:05.142240Z
0 likes, 0 repeats
But it doesn't actually understand what you are saying or what it is saying. It just recognizes patterns in what you are saying and it is able to generate responses that follow the patterns in answers to similar questions. It actually is really similar to markov bots, except it has a better grasp of grammar that goes beyond "this word is usually followed by this word"
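The markov-bot comparison above ("this word is usually followed by this word") can be shown as a toy chain. The corpus and names are invented for illustration; unlike a transformer, each step here conditions only on the single previous word:

```python
import random

# Build "this word is usually followed by this word" transitions.
corpus = "the cat sat on the mat and the cat ran away".split()
transitions = {}
for cur, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(cur, []).append(nxt)

def markov_generate(start, n_words, rng):
    """Walk the chain: each step depends only on the previous word."""
    words = [start]
    while len(words) < n_words:
        options = transitions.get(words[-1])
        if not options:  # dead end: nothing ever followed this word
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(markov_generate("the", 6, random.Random(0)))
```

An LLM differs in that it attends over the whole context window rather than just the last word, which is why its grammar holds together over long spans where a markov bot's falls apart.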
(DIR) Post #AWM7NaDdr7SkVQsKjw by goatmeal@shitposter.club
2023-06-04T18:16:21.177247Z
1 likes, 0 repeats
@bot @animeirl these differ from chatgpt in stupid ways. like I can't ask chatgpt for a list of racial slurs because it will just say no.
(DIR) Post #AWM7Us4jreuZuiBbcG by KitlerIs6@seal.cafe
2023-06-04T18:17:40.205734Z
0 likes, 0 repeats
It's just reflecting its training data. If you trained it using pre 2016 /pol/ posts and asked it "why are niggers so retarded" it would give a much better response.
(DIR) Post #AWM7axV0c0vqTMNPP6 by goatmeal@shitposter.club
2023-06-04T18:18:46.227150Z
0 likes, 0 repeats
@bot @KitlerIs6 @animeirl when you ask chatgpt a question it's like asking 10,000 people that question, and then a computer program tries to put all their answers together.
(DIR) Post #AWM7mcwerDduCTsCNU by LucasVL@fedi.lucasvl.nl
2023-06-04T18:20:51.269053Z
1 likes, 0 repeats
@animeirl @bot @goatmeal does that mean i'm an llm?
(DIR) Post #AWM85YxzsCHfMbWqSO by KitlerIs6@seal.cafe
2023-06-04T18:24:18.144900Z
0 likes, 0 repeats
I read your words, parse them into ideas and concepts, and then reflect on those concepts, draw upon my existing knowledge and beliefs to form my own ideas and concepts in my head, and then translate those ideas and concepts into words which I type out. ChatGPT sucks in 2000 characters from the conversation, runs it through a bunch of mathematical statements that have been tuned to predict what the next response would be, and spits out a bunch more characters.

A good way to think of it is how a non-lucid dream works. Your subconscious is predictively generating a world and the events that occur in that world, and it is able to somewhat convincingly mirror what is expected to happen in that world most of the time. But your subconscious is not actually simulating a world; there are no actual forces or mathematical cause and effect dictating what happens in your dream. It's just guesswork that generally mimics what you would expect, as long as you don't think about it too hard.

*disclaimer: I have no idea how similar dream worlds actually are to ChatGPT, this is more of an analogy than a direct comparison.
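The "sucks in 2000 characters" point above can be sketched as a fixed context window. The figure and function name are taken loosely from the post for illustration (real chat systems count tokens, not characters, and windows vary by model):

```python
CONTEXT_WINDOW = 2000  # characters, matching the post's example figure

def build_prompt(conversation, window=CONTEXT_WINDOW):
    """Join the turns and keep only the most recent characters.
    Older turns silently fall off the front: the model doesn't
    'forget' so much as it never sees the old text at all."""
    text = "\n".join(conversation)
    return text[-window:]
```

This is also why a long chat seems to lose track of earlier details: nothing outside the window is ever fed back in.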
(DIR) Post #AWM89IznC9ZQzM9YSu by KitlerIs6@seal.cafe
2023-06-04T18:24:58.625149Z
0 likes, 0 repeats
Nuh uh.
(DIR) Post #AWM8THeht3tcckoi7E by KitlerIs6@seal.cafe
2023-06-04T18:28:35.194507Z
0 likes, 0 repeats
Then why have I found black women unattractive for as long as I have been alive?
(DIR) Post #AWM8sAnwUp6icE0NEW by KitlerIs6@seal.cafe
2023-06-04T18:33:04.691003Z
0 likes, 0 repeats
That's just not true though. The AI can't learn from talking to you. You talk to it long enough and it will start to forget things you have said to it. If you can't tell the difference between it and a real person after a couple of conversations IDK what to say.
(DIR) Post #AWMA9eJA9icMzpVNR2 by meowski@fluf.club
2023-06-04T18:47:25.424265Z
0 likes, 0 repeats
@bot @animeirl @goatmeal cucked -> social justice guardrails
(DIR) Post #AWMAEXm1sUkIB6LCzo by meowski@fluf.club
2023-06-04T18:48:18.914182Z
1 likes, 0 repeats
@animeirl @bot @goatmeal calling a transformer model statistical is somewhat of an oversimplification
(DIR) Post #AWMB2QkYGaN0sCRCbo by KitlerIs6@seal.cafe
2023-06-04T18:57:21.065404Z
0 likes, 0 repeats
But even when it comes to non-political topics it can't learn things or follow a complicated conversation.
(DIR) Post #AWMBOxzPMZKKKp8IDY by KitlerIs6@seal.cafe
2023-06-04T19:01:24.852826Z
0 likes, 0 repeats
Sure, but AI doesn't have any genetic programming. It doesn't have any individuality or instincts.