[HN Gopher] The GPT era is already ending
___________________________________________________________________
The GPT era is already ending
Author : bergie
Score : 22 points
Date   : 2024-12-08 21:55 UTC (1 hour ago)
(HTM) web link (www.theatlantic.com)
(TXT) w3m dump (www.theatlantic.com)
| talldayo wrote:
| With a whimper too, not the anticipated bang.
| aegypti wrote:
| https://archive.ph/xUJMG
| comeonbro wrote:
| Insane cope. Emily Bender and Gary Marcus _still_ trying to push
| "stochastic parrot", the day after o1 causes what was one of the
| last remaining credible LLM reasoning skeptics (Chollet) to admit
| defeat.
| nwhnwh wrote:
| Push what?
| observationist wrote:
| Anti AI grift and FUD and more or less awful takes, cashing
| in on their credentials to the detriment of their respective
| institutions.
| beepbooptheory wrote:
| Yeah they are definitely making a lot of money doing this
| compared to being on the other side.
| jazz9k wrote:
| It ended because its a glorified search engine now. All of the
| more powerful functionality was limited or removed
|
| My guess is to sell it to governments and anyone else willing to
| pay for it.
| MuffinFlavored wrote:
| Source/citations/examples?
| juped wrote:
| The "GPT Era" ended with OpenAI resting on its junky models while
| Anthropic runs rings around it, but sure, place a puff piece in
| the Atlantic; at least it's disclosed sponsored content?
| Zardoz89 wrote:
| And presented in audio narration at the head of the written
| article: "Produced by ElevenLabs and News Over Audio (Noa) using
| AI narration. Listen to more stories on the Noa app."
| OutOfHere wrote:
| I like AIs with a personality; I like them to shoot from the hip.
| 4o does this better than o1.
|
| o1 however is often better for coding and for puzzle-solving,
| which are not the vast majority of uses of LLMs.
|
| o1 is so much more expensive than 4o that it makes zero sense for
| it to be a general replacement. This will never change because o1
| will always use more tokens than 4o.
| akira2501 wrote:
| > I like AIs with a personality
|
| You are confusing training artifacts for "personality."
|
| > is often better for coding and for puzzle-solving, which are
| not the vast majority of uses of LLMs.
|
| To see a product fail to evolve and merely stratify itself
| gives a solid hint as to what its likely future is going to
| be.
| OutOfHere wrote:
| That's not what I mean here by personality. I mean that for
| everyday chats, I like AIs to freely express their own
| internal beliefs about something without having to think them
| through. They should know when to override what I said, to not
| be mere robots, and this is where 4o shines.
|
| 4o versions have progressively become better at instruction
| following. I don't think their peak has been reached yet.
| Skunkleton wrote:
| Please read the article before posting comments, or at least read
| a summary. The article is saying that GPT-4o style models are
| reaching their peak, and are being replaced by o1 style models.
| The article does not make value judgements on the usefulness of
| existing AI or business viability of AI companies.
| akira2501 wrote:
| > does not make value judgements
|
| So we are not allowed to? The Hacker News gatekeeping instinct
| is particularly hilarious to me.
| flappyeagle wrote:
| Just read the article before commenting on the article.
| Weird that you consider this gatekeeping, instead of just
| not talking out of your ass.
| Dilettante_ wrote:
| I started skimming about 1/3 through this article. Looks to be
| just a fluff piece about how cool the old AI models were and how
| they pale in comparison with what's in the works, with about 2 to
| 5 lines of shallow 'criticism' thrown in as an alibi?
|
| Ten minutes and a teeny bit of mental real estate I will never
| get back.
___________________________________________________________________
(page generated 2024-12-08 23:01 UTC)