[HN Gopher] "Language and Image Minus Cognition": An Interview w...
___________________________________________________________________
"Language and Image Minus Cognition": An Interview with Leif
Weatherby
Author : Traces
Score : 28 points
Date : 2025-06-11 14:03 UTC (3 days ago)
(HTM) web link (www.jhiblog.org)
(TXT) w3m dump (www.jhiblog.org)
| joe_the_user wrote:
| I would claim that any reasonable "bright line" critique of AI
| is going to be a "remainder" theory. If one models and
| "tightly" articulates a thing that AI can't do, well, one has
| basically created a benchmark that systems are going to
| gradually (or quickly) surpass. But the ability to surpass
| benchmarks isn't necessarily the ability to do anything, and
| one can still sketch which remainders tend to remain.
|
| The thing is, high social science theorists, like the person
| interviewed, want to claim a positive theory rather than a
| remainder theory, because such a theory seems more
| substantial. But for the above reason, I think such substance
| is basically an illusion.
| skhameneh wrote:
| Anecdotally, LLMs as a whole haven't made my life noticeably
| better. I see some great use cases and some impressive
| demos, but they are just that. Looking at how many things
| LLMs have noticeably made worse, my own impression is that
| the harms outweigh the improvements.
|
| - I asked when a software EOL would be; the LLM response
| (incorrectly) used the past tense for an event yet to happen.
| - The replacement of Google Assistant with Gemini broke using
| my phone while locked, and home automation is noticeably less
| reliable.
| - I asked an LLM whether a device "phones home" and the
| answer was wrong.
| - I asked an LLM to generate some boilerplate code with very
| specific instructions, and the generated code was unusable.
| - I gave critical feedback to a company that works with LLMs
| regarding a poor experience (along with some suggestions),
| and they seemed to have no interest in making adjustments.
| - I've seen LLM note takers with incorrect notes, often
| skipping important or nuanced details.
|
| I have had good experiences with LLMs and other ML models, but
| most of those experiences were years ago before LLMs were being
| unnecessarily shoved into every possible scenario. At the end
| of the day, it doesn't matter if the experience is powered by
| an LLM, it matters whether the experience is effective overall
| (by many different measures).
| gametorch wrote:
| My experience is the opposite.
|
| I have an extensive, strong traditional CS background. I
| built and shipped a production-grade SaaS with paying users
| in 2 months. I've built things in a day that would have
| taken me 3+ days manually. Through all of that, I hardly
| wrote a single line of code. It was all GPT-4.1 and o3.
|
| Granted, I think you need quite a lot of knowledge and
| experience to come up with coherent prompts and to do the
| surgery necessary to get yourself out of a jam. But LLMs
| have easily 3x'd my productivity by very quantifiable
| metrics, like the number of features shipped.
|
| I've noticed that people who actually build stuff agree with
| me. That's because it's such a tremendous addition of value
| to our lives. Armchair speculators seem to see only the
| negative side.
___________________________________________________________________
(page generated 2025-06-14 23:00 UTC)