[HN Gopher] What Is ChatGPT Doing and Why Does It Work?
___________________________________________________________________
What Is ChatGPT Doing and Why Does It Work?
Author : washedup
Score : 49 points
Date : 2023-02-14 21:48 UTC (1 hour ago)
(HTM) web link (writings.stephenwolfram.com)
(TXT) w3m dump (writings.stephenwolfram.com)
| ortusdux wrote:
| Tangentially related, but I really liked Tom Scott's recent video
| on ChatGPT.
|
| https://www.youtube.com/watch?v=jPhJbKBuNnA
| sharemywin wrote:
| It's Just Adding One Word at a Time
|
| I'm curious: how do you write?
| sakex wrote:
| I don't re-read the whole sentence before every word I type,
| unlike transformers. Also, I can go back and correct my
| mistakes.
| bheadmaster wrote:
| > I don't re-read the whole sentence before every word I type
|
| Maybe you don't do it consciously, but your brain is quite
| aware of every word you typed before the word you're typing.
| SketchySeaBeast wrote:
| My brain knows where it's going to go by the end of the
| sentence as well; it's conceptualized the whole sentence,
| and my hands are on the road to completing it. I'm not
| only aware of what I've written so far.
| bheadmaster wrote:
| > My brain knows where it's going to go by the end of the
| sentence as well
|
| How would you know that?
|
| Sincere question, because to me it feels like my brain is
| improvising word by word when typing out this sentence. I
| often delete and retype it until it feels right, but in
| the process of _typing_ a single sentence, I'm just
| chaining words one after another the way they feel right.
|
| In other words, my brain doesn't exactly _know_ the
| sentence beforehand - it improvises by chaining the
| words, while applying a fitness function _F(sentence) ->
| feeling_ that tells it whether it corresponds to what I
| wanted to say or not.
| laurensr wrote:
| Not everyone has an inner monologue
| https://mymodernmet.com/inner-monologue/
| Ensorceled wrote:
| > I'm just chaining words one after another the way they
| feel right.
|
| I think we know where we are going in a conceptual
| sense; the words start feeling right because they are
| taking us to that destination, or not.
|
| If I leave a sentence in the middle for some reason, when
| I return I often have zero idea how to finish the
| sentence or even what the sentence fragment means.
| SketchySeaBeast wrote:
| In my case it's because my internal dialogue is saying
| the sentence before I get to the end of it. I usually
| have the entire sentence in my inner dialogue before I
| even start typing. Will I edit during typing? Sure, but I
| have a first version in my head before I start.
| bheadmaster wrote:
| > I usually have the entire sentence in my inner dialogue
| before I even start typing
|
| Interesting. Perhaps the question then becomes, does your
| inner dialogue simply chain the words one after another,
| or does it come up with sentences as a whole?
| bgun wrote:
| Premise, outline, augmentation.
| Swizec wrote:
| I have an idea or concept first, then I translate that into
| analogies/stories, then I put those down as
| sentences/paragraphs, then I wordsmith to make it flow better.
|
| At no point am I in a mode where I say a word and think
| _"What's most likely to come next?"_. The concept/idea
| comes first. Likely I will try different angles until I
| find what lands with the audience.
|
| ChatGPT works more like a stereotypical extrovert: it
| doesn't think then output, it uses output _to_ think.
| Which can be a fine mode for humans too: sometimes you
| don't know what you're trying to say yet, or you need to
| verbalize what your gut is thinking.
| IIAOPSW wrote:
| There are a lot of answers here, but as Feynman said,
| "that which I can't build I don't understand." If you
| can't make something that writes for you, you don't
| really understand how you write. That feels impossible,
| to be able to do something without understanding how you
| do it. Brain be like that sometimes.
| leereeves wrote:
| Understanding how ChatGPT writes doesn't translate to
| understanding how humans write/speak/think. We still don't
| understand the latter.
| JellyBeanThief wrote:
| I'd be careful about interpreting that. Can you "build"
| numbers? Yes, in one sense; no, in another. And he doesn't
| specify whether being able to build something is sufficient
| for understanding it or merely necessary.
|
| And even when you've built it, what about other ways that it
| could be built? If you implement binary search iteratively,
| then perhaps you understand binary search. But do you
| understand its recursive implementation?
| sharemywin wrote:
| Sometimes I copy and paste, so maybe that counts as
| something different.
| Traubenfuchs wrote:
| I was just trying to do some metacognition and observe how I
| write, but apparently it's really just word by word. I neither
| form full sentences, nor full words or even just abstract
| imagination in my head. The words just appear from the
| "darkness", with some kind of sophistication postprocessor that
| tries to make some output more verbose or use more appropriate
| adjectives. Is this how people with aphantasia live? I don't
| like it. I expected something more sophisticated. Maybe that's
| why my writing often reads like barely connected verbal
| diarrhoea, like an "inner monologue" writing task
| back in school.
|
| How do you experience it?
| Ensorceled wrote:
| There is definitely something else going on ... if I
| stop writing and come back to it, the sentence fragment
| I was writing often makes no sense at all. If it were
| word by word, I'd just start writing again.
|
| Also, I usually "hear" the next sentence fragment in my
| head before I'm typing it.
| anothernewdude wrote:
| Mostly by rewriting and editing.
| acchow wrote:
| I think in an "idea" space, which then I transcribe into words.
| krackers wrote:
| If this idea space is linear, it gives a whole new meaning to
| an idea being orthogonal to another.
| yen223 wrote:
| I've always thought "orthogonal" as in ideas came from
| "orthogonal" as in vectors - meaning to be independent of
| each other.
| [deleted]
| Scene_Cast2 wrote:
| Well, aren't beam search and other search strategies
| also used, and more sophisticated than greedy selection?
| anothernewdude wrote:
| They barely use beam search. It requires scoring multiple
| candidate continuations at each step, and so is expensive.
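For intuition, here is a minimal sketch of the two decoding strategies being discussed. The next-token table below is entirely made up for illustration (a real language model produces these probabilities with a softmax over its vocabulary at each step); the point is that a small beam can recover a higher-probability sequence that greedy selection misses.

```python
import heapq

# Toy next-token model: maps a prefix (tuple of words) to
# {token: probability}. The numbers are invented for this example.
TOY_MODEL = {
    (): {"the": 0.6, "a": 0.4},
    ("the",): {"cat": 0.4, "dog": 0.6},
    ("a",): {"cat": 0.95, "dog": 0.05},
}

def greedy_decode(steps):
    """Commit to the single most likely next token at every step."""
    prefix = ()
    for _ in range(steps):
        dist = TOY_MODEL[prefix]
        prefix += (max(dist, key=dist.get),)
    return prefix

def beam_decode(steps, beam_width=2):
    """Keep the beam_width highest-probability prefixes at every step."""
    beams = [(1.0, ())]  # (sequence probability, prefix)
    for _ in range(steps):
        candidates = []
        for prob, prefix in beams:
            for tok, p in TOY_MODEL[prefix].items():
                candidates.append((prob * p, prefix + (tok,)))
        beams = heapq.nlargest(beam_width, candidates)
    return beams[0][1]  # best prefix found
```

On this toy table, greedy decoding commits to "the" (0.6) and ends at "the dog" with total probability 0.36, while the width-2 beam keeps "a" alive and finds "a cat" at 0.38. Running a beam means evaluating the model on every kept prefix at every step, which is the cost the comment above refers to.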
| spion wrote:
| The answer to this is: "we don't really know, as it's a
| very complex function automatically discovered by means
| of slow gradient descent, and we're still finding out."
|
| Here are some of the fun things we've found out so far:
|
| - GPT style language models build a model of the world:
| https://arxiv.org/abs/2210.13382
|
| - GPT style language models end up internally implementing a mini
| "neural network training algorithm" (gradient descent fine-tuning
| for given examples): https://arxiv.org/abs/2212.10559
| xwdv wrote:
| You can approximate your own ChatGPT on your iPhone by
| just randomly selecting words that appear in the
| autocomplete to form a sentence. This is basically how
| ChatGPT works, but better and at a much larger scale.
| Give it a try; you'll be surprised what comes out.
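The phone-keyboard trick is essentially a first-order Markov chain: sample the next word based only on the current one. A minimal sketch (toy corpus, illustrative only; models like ChatGPT condition on the entire preceding context through a transformer, not just the last word):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that ever follow it."""
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def babble(chain, start, length, seed=0):
    """Generate text by repeatedly sampling a word seen after the last one."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)
```

Because duplicated transitions stay in the list, frequent word pairs are sampled proportionally more often, which is why the output feels locally plausible even though the generator has no memory beyond one word.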
| Ensorceled wrote:
| I don't know what to do with my life right now. I just want to
| be able to be with you and I know that I am not alone in my
| feelings.
|
| Wow. Very "Valentine's Day" meets therapy session.
| KRAKRISMOTT wrote:
| Not all Markovian maximum likelihood estimators are made
| equal; ChatGPT can be considered sui generis.
___________________________________________________________________
(page generated 2023-02-14 23:00 UTC)