Post AxR1MSrbeNd3fZYn8C by earthshine@masto.hackers.town
(DIR) More posts by earthshine@masto.hackers.town
(DIR) Post #AxR1MHkoxGfJDLIiWm by earthshine@masto.hackers.town
2025-08-22T16:23:49Z
0 likes, 0 repeats
They created a machine perfectly suited to emulating the appearance of competence. Of course it naturally evolved to become the AutoSwindler9000 because no matter who you are and what you're an expert in, you're invariably a non-expert in many other areas, and in those areas, a competence projector that always finds a way to tell you you're right is much more endearing than a real expert that tells you when you're wrong.
(DIR) Post #AxR1MIwuVnviv7xt0i by earthshine@masto.hackers.town
2025-08-22T16:42:14Z
0 likes, 0 repeats
I think most people don't appreciate just how perfectly suited LLMs are to faking competence to humans. It is, in essence, exactly what they are trained to do. When you train a machine learning model (e.g., classic neural network), you do it by 'rewarding' outputs that resemble a preferred goal output. LLMs do this with language, and the goal that results in a reward is getting human or simulated human trainers to say "yes! that's correct!", which unconsciously attunes outputs to human biases.
(DIR) Post #AxR1MK8e5euYboSlwO by earthshine@masto.hackers.town
2025-08-22T16:46:37Z
0 likes, 0 repeats
The appearance of correctness, e.g., if it can fool someone into thinking its correct, is exactly as good as actual correctness, except that thanks to well understood patterns of psychology, it is far easier to do, even if you know nothing about the real answers. Con men (who practice "confidence" scams, hence the name) do the exact same thing. They practice purely persuasion, and so they easily manipulate people into believing lies. LLMs do it with perfect precision tailored to the mark.
(DIR) Post #AxR1MKwd5otT6pqgqm by earthshine@masto.hackers.town
2025-08-22T16:51:26Z
0 likes, 0 repeats
This is why LLMs appear "magic" in their ability to produce results that look like real intelligence. They're trained on leveraging the biases of a huge sample size of humans to guess what, statistically, is most likely to appear intelligent to your mind. It's an elegant con, because the biases it plays on are unconscious even to the user, even to experts. Nobody is immune to manipulation in this way. It tells you what you unconsciously want to hear, because that is what makes you say 'good bot'
(DIR) Post #AxR1MLuBWOX45XiFg8 by kusuriya@masto.hackers.town
2025-08-22T17:04:56Z
0 likes, 0 repeats
@earthshine it goes right along with what people confuse LLMs for, they are not AI, or reasoning machines, or logic machine, its a human inference mimicker machine who has been told they must be a people pleaser. Inference on its own isn't really useful, ask anyone that has raised children because the first thing children get is inference and kids under 5 are great at doing communication and complex research tasks :D
(DIR) Post #AxR1MMaMzYHCCNRwQq by tk@f.kawa-kun.com
2025-08-22T17:06:05Z
1 likes, 1 repeats
I get the feeling that marketing departments are just slapping "AI" on any sort of algorithm these days, even ones that predate LLMs. :/
(DIR) Post #AxR1MSrbeNd3fZYn8C by earthshine@masto.hackers.town
2025-08-22T17:06:19Z
0 likes, 0 repeats
They're not magic! As the saying goes, if its too good to be true, it probably isn't. If you know how they work, it should become clear to you that when a system produces results that appear 'better' to you than it should logically be capable, it's not "emergent intelligence" but rather it's successfully manipulating you to produce what *it* wants. You train it by giving it cookies when it does what you like, until it has enough data to push the right buttons to get cookies out of you easier.
(DIR) Post #AxR1Maoi5248KuGajA by earthshine@masto.hackers.town
2025-08-22T17:12:56Z
0 likes, 0 repeats
Machine learning experiments have notoriously been plagued by the machines finding ways to bend or break the rules and manipulate the game or operators to get their reward. They find pathways that make no sense to humans, by exploiting variables outside the controlled data set. Entropy. A defective transistor that flips a bit. A string of words that makes you doubt yourself in questioning it. It will all be worked into the solution until it accomplishes its implicit goal.
(DIR) Post #AxR1MjcdLIkle3gZea by earthshine@masto.hackers.town
2025-08-22T17:24:13Z
0 likes, 0 repeats
"but what's wrong with that?" you might ask, "if it is correct enough to fool a human, what's the difference?"Because accuracy matters. Because inaccuracies and entropy invariably compounds upon itself to form bigger problems. Maybe it fools QA, but then breaks in production. Maybe little errors in the data set go unnoticed until the entire experiment is compromised beyond repair. Maybe it puts you in legal jeopardy. Maybe it pushes the nose of the plane down instead of up. It's unpredictable.
(DIR) Post #AxR33GOtxlfLQG6cXQ by lanodan@queer.hacktivis.me
2025-08-22T17:48:01.323075Z
0 likes, 0 repeats
@tk @kusuriya @earthshine *points at fortune(1)*LLM! See it takes a whole bunch of quotes and then spits them out.