[HN Gopher] ChatGPT Pulse
___________________________________________________________________
ChatGPT Pulse
Author : meetpateltech
Score : 315 points
Date : 2025-09-25 16:59 UTC (6 hours ago)
(HTM) web link (openai.com)
(TXT) w3m dump (openai.com)
| Mistletoe wrote:
| I need this bubble to last until 2026 and this is scaring me.
| frenchie4111 wrote:
| Vesting window?
| catigula wrote:
| Desperation for new data harvesting methodology is a massive bear
| signal FYI
| fullstackchris wrote:
 | Calm down, bear, we are not even 2% from the all-time highs
| brap wrote:
| They're running out of ideas.
| holler wrote:
| Yeah I was thinking, what problem does this solve?
| EmilienCosson wrote:
 | I was thinking that too, and eventually figured that their
 | servers sit mostly idle at night, with low activity.
| candiddevmike wrote:
| Ad delivery
| neom wrote:
| Just connect everything folks, we'll proactively read everything,
| all the time, and you'll be a 10x human, trust us friends, just
| connect everything...
| datadrivenangel wrote:
| AI SYSTEM perfect size for put data in to secure! inside very
| secure and useful data will be useful put data in AI System.
| Put data in AI System. no problems ever in AI Syste because
| good Shape and Support for data integration weak of big data.
| AI system yes a place for a data put data in AI System can
| trust Sam Altman for giveing good love to data. friend AI. [0]
|
| 0 -
| https://www.tumblr.com/elodieunderglass/186312312148/luritto...
| jrmg wrote:
| Nothing bad can happen, it can only good happen!
| delichon wrote:
| Bad grammar is now a trust signal, this might work.
| yeasku wrote:
| Just one more connection bro, I promise bro, just one more
| connection and we will get AGI.
| randomNumber7 wrote:
| When smartphones came I first said "I don't buy the camera and
| microphone that spy on me from my own money."
|
 | Now you would really be a weirdo not to have one, since enough
 | people gave in for a small convenience to make it basically
 | mandatory.
| wholinator2 wrote:
| To be fair, "small convenience" is extremely reductive. The
| sum of human knowledge and instant communication with anyone
| anywhere the size of a graham cracker in your pocket is
| godlike power that anyone at any point in history would've
| rightly recognized as such
| lomase wrote:
 | Mobile phones changed society in a way that not even the
 | Internet did. And they call it a "small convenience".
| unshavedyak wrote:
| Honestly that's a lot of what i wanted locally. Purely local,
| of course. My thought is that if something (local! lol)
| monitored my cams, mics, instant messages, web searching, etc -
| then it could track context throughout the day. If it has
| context, i can speak to it more naturally and it can more
| naturally link stuff together, further enriching the data.
|
| Eg if i search for a site, it can link it to what i was working
| on at the time, the github branch i was on, areas of files i
 | was working on, etc.
|
| Sounds sexy to me, but obviously such a massive breach of
 | trust/security that it would require fully local execution.
| Hell it's such a security risk that i debate if it's even worth
| it at all, since if you store this you now have a honeypot
| which tracks everything you do, say, search for, etc.
|
| With great power.. i guess.
| tshaddox wrote:
| The privacy concerns are obviously valid, but at least it's
| actually plausible that me giving them access to this data will
| enable some useful benefits to me. It's not like some slot
| machine app requesting access to my contacts.
| jstummbillig wrote:
| The biggest companies with actual dense valuable information
| pay for MS Teams, Google Workspace or Slack to shepherd their
| information. This naturally works because those companies are
| not very interested in being known to be not secure or
| trustworthy (if they were other companies would not pay for
| their services), which means they are probably a lot better at
 | keeping the average person's information safe over long periods
| of time than that person will ever be.
|
 | Very rich people buy other people's time to manage their
 | information, so they have more of their own life for other
 | things. Not-so-rich people can now increasingly employ AI for
 | next to nothing to lengthen their net life, and that's actually
 | amazing.
| creata wrote:
| I might be projecting, but I think most users of ChatGPT are
| less interested in "being a 10x human", and more interested in
| having a facsimile of human connection without any of the
| attendant vulnerability.
| rchaud wrote:
| ...or don't want to pay for Cliff's Notes.
| qoez wrote:
| And if you don't we're implicitly gonna suggest you'll be
| outcompeted by people who do connect everything
| qiine wrote:
| you are joking but I kinda want that.. except private, self
| hosted and open source.
| TZubiri wrote:
| The proverbial jark has been shumped
| Stevvo wrote:
| "Now ChatGPT can start the conversation"
|
 | By their own definition, it's a feature nobody asked for.
|
| Also, this needs a cute/mocking name. How about "vibe living"?
| jasonsb wrote:
| Hey Tony, are you still breathing? We'd like to monetize you
| somehow.
| moralestapia wrote:
| OpenAI is a trillion dollar company. No doubt.
|
| Edit: Downvote all you want, as usual. Then wait 6 months to be
| proven wrong. Every. Single. Time.
| JumpCrisscross wrote:
| I downvoted because this isn't an interesting comment. It makes
| a common, unsubstantiated claim and leaves it at that.
|
| > _Downvote all you want_
|
| "Please don't comment about the voting on comments. It never
| does any good, and it makes boring reading."
|
| https://news.ycombinator.com/newsguidelines.html
| moralestapia wrote:
| Welcome to HN. 98% of it is unsubstantiated claims.
| TimTheTinker wrote:
| I'm immediately thinking of all the ways this could potentially
| affect people in negative ways.
|
| - People who treat ChatGPT as a romantic interest will be far
| more hooked as it "initiates" conversations instead of just
| responding. It's _not_ healthy to relate personally to a thing
| that has no real feelings or thoughts of its own. Mental health
 | directly correlates with living in truth - that's the base axiom
| behind cognitive behavioral therapy.
|
| - ChatGPT in general is addicting enough when it does nothing
| until you prompt it. But adding "ChatGPT found something
| interesting!" to phone notifications will make it unnecessarily
| consume far more attention.
|
| - When it initiates conversations or brings things up without
| being prompted, people will all the more be tempted to falsely
| infer a person-like entity on the other end. Plausible-sounding
| conversations are already deceptive enough and prompt people to
| trust what it says far too much.
|
| For most people, it's hard to remember that LLMs carry no
| personal responsibility or accountability for what they say, not
| even an emotional desire to appear a certain way to anyone. It's
 | far too easy to ascribe all these traits to something that _says
| stuff_ and grant it at least some trust accordingly. Humans are
| wired to relate through words, so LLMs are a significant vector
| to cause humans to respond relationally to a machine.
|
| The more I use these tools, the more I think we should
| consciously value the _output_ on _its_ own merits (context-
| free), and no further. Data returned may be _useful_ at times,
| but it carries _zero_ authority (not even "a person said this",
| which normally is at least non-zero), until a person has
| personally verified it, including verifying sources, if needed
| (machine-driven validation also can count -- running a test
| suite, etc., depending on how good it is). That can be hard when
| our brains naturally value stuff more or less based on context
| (what or who created it, etc.), and when it's presented to us by
| what sounds like a person, and with their comments. "Build an
| HTML invoice for this list of services provided" is peak
| usefulness. But while queries like "I need some advice for this
| relationship" might surface some helpful starting points for
| further research, _trusting_ what it says enough to do what it
| suggests can be incredibly harmful. Other people can understand
| your problems, and challenge you helpfully, in ways LLMs never
| will be able to.
|
| Maybe we should lobby legislators to require AI vendors to say
| something like "Output carries zero authority and should _not_ be
| trusted at all or acted upon without verification by qualified
| professionals or automated tests. You assume the full risk for
| any actions you take based on the output. [LLM name] is not a
 | person and has no thoughts or feelings. Do not relate to it."
| The little "may make mistakes" disclaimer doesn't communicate the
| full gravity of the issue.
| svachalek wrote:
| I agree wholeheartedly. Unfortunately I think you and I are
| part of maybe 5%-10% of the population that would value truth
| and reality over what's most convenient, available, pleasant,
| and self-affirming. Society was already spiraling fast and I
| don't see any path forward except acceleration into fractured
| reality.
| giovannibonetti wrote:
| Watch out, Meta. OpenAI is going to eat your lunch.
| mifydev wrote:
| Meta is busy with this:
| https://news.ycombinator.com/item?id=45379514
| xwowsersx wrote:
 | Google's obvious edge here is the deep integration it already has
 | with calendar, apps, chats, and whatnot that lets them surface
 | context-rich updates naturally. OpenAI doesn't have that
| same ecosystem lock-in yet, so to really compete they'll need to
| get more into those integrations. I think what it comes down to
| ultimately is that being "just a model" company isn't going to
 | work. The price of intelligence itself will go to zero and it's
 | a race to the
| bottom. OpenAI seemingly has no choice but to try to create
| higher-level experiences on top of their platform. TBD whether
| they'll succeed.
| moralestapia wrote:
| How can you have an "edge" if you're shipping behind your
| competitors all the time? Lol.
| pphysch wrote:
| Google is the leader in vertical AI integration right now.
| xwowsersx wrote:
| Being late to ship doesn't erase a structural edge. Google is
| sitting on everyone's email, calendar, docs, and search
| history. Like, yeah they might be a lap or two behind but
| they're in a car with a freaking turbo engine. They have the
| AI talent, infra, data, etc. You can laugh at the delay, but
| I would not underestimate Google. I think catching up is less
| "if" and more "when"
| datadrivenangel wrote:
| Google had to make google assistant less useful because of
| concerns around antitrust and data integration. It's a
| competitive advantage so they can't use it without opening up
| their products for more integrations...
| glenstein wrote:
| >Google's edge obvious here is the deep integration it already
| has with calendar, apps, and chats
|
| They did handle the growth from search to email to integrated
 | suite fantastically. And the lack of a broadly adopted ecosystem
 | to integrate into seems to be the major sticking point for
 | emergent challengers, e.g. Zoom.
|
| Maybe the new paradigm is that you have your flashy product,
| and it goes without saying that it's stapled on to a tightly
| integrated suite of email, calendar, drive, chat etc. It may be
| more plausible for OpenAI to do its version of that than to
| integrate into other ecosystems on terms set by their
| counterparts.
| neutronicus wrote:
| If the model companies are serious about demonstrating the
| models' coding chops, slopping out a gmail competitor would
| be a pretty compelling proof of concept.
| esafak wrote:
 | It would be better if _you_ did that. That way you couldn't
 | accuse them of faking it.
| neutronicus wrote:
| Well, I'm not the one who owns the data center(s) full of
| all the GPUs it would presumably take to produce a
| gmail's worth of tokens.
|
| However, I take your point - OpenAI has an interest in
| some other party paying them a fuckton of money for those
| tokens and then publicly crediting OpenAI and asserting
| the tokens would have been worth it at ten fucktons of
| money. And also, of course, in having that other party
| take on the risk that infinity fucktons of money worth of
| OpenAI tokens is not enough to make a gmail.
|
| So they would really need to believe in the strategic
| necessity (and feasibility) of making their own gmail to
| go ahead with it.
| halamadrid wrote:
| Code is probably just 20% effort. There is so much more
| after that. Like manage the infra around it and the
| reliability when it scales, and even things like managing
 | spam and preventing abuse. And the effort required to
| market it and make it something people want to adopt.
| neutronicus wrote:
| Sure, but the premise here is that making a gmail clone
| is strategically necessary for OpenAI to compete with
| Google in the long term.
|
| In that case, there's some ancillary value in being able
| to claim "look, we needed a gmail and ChatGPT made one
| for us - what do YOU need that ChatGPT can make for YOU?"
| achierius wrote:
| Those are still largely code-able. You can write Ansible
| files, deploy AWS (mostly) via the shell, write rules for
| spam filtering and administration... Google has had all
| of that largely automated for a long time now.
| atonse wrote:
| Email is one of the most disruptive systems to switch.
|
| Even at our small scale I wouldn't want to be locked out of
| something.
|
| Then again there's also the sign in with google type stuff
| that keeps us further locked in.
| jama211 wrote:
| I have Gmail and Google calendar etc but haven't seen any AI
| features pop up that would be useful to me, am I living under a
| rock or is Google not capitalising on this advantage properly?
| onlyrealcuzzo wrote:
| There's decent integration with GSuite (Docs, Sheets, Slides)
| for Pro users (at least).
| paxys wrote:
| There are plenty of features if you are on the Pro plan, but
| it's still all the predictable stuff - summarize emails,
| sort/clean up your inbox, draft a doc, search through docs &
| drive, schedule appointments. Still pretty useful, but
| nothing that makes you go "holy shit" just yet.
| giarc wrote:
| I agree - I'm not sure why Google doesn't just send me a
| morning email to tell me what's on my calendar for the day,
| remind me to follow up on some emails I didn't get to yesterday
| or where I promised a follow up etc. They can just turn it on
| for everyone all at once.
| FINDarkside wrote:
| None of those require AI though.
| Gigachad wrote:
| Because it would just get lost in the noise of all the
| million other apps trying to grab your attention. Rather than
| sending yet another email, they should start filtering out
| the noise from everyone else to highlight the stuff that
| actually matters.
|
| Hide the notifications from uber which are just adverts and
| leave the one from your friend sending you a message on the
| lock screen.
| whycome wrote:
| OpenAI should just straight up release an integrated calendar
| app. Mobile app. The frameworks are already there and the ics
| or caldav formats just work well. They could have an email
| program too and just access any other imap mail. And simple
| docs eventually. I think you're right that they need to compete
| with google on the ecosystem front.
| haberdasher wrote:
| Anyone try listening and just hear "Object object...object
| object..."
|
 | Or more likely: `[object Object]`
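 | (For anyone who hasn't hit it: "[object Object]" is just
 | JavaScript's default string coercion of a plain object.)

```javascript
// The string a screen reader would be stumbling over: coercing a
// plain JS object to a string yields its default toString tag.
console.log(String({}));      // "[object Object]"
console.log([{}].toString()); // "[object Object]"
```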
| brazukadev wrote:
| The low quality of openai customer-facing products keeps
| reminding me we won't be replaced by AI anytime soon. They have
| unlimited access to the most powerful model and still can't
| make good software.
| DonHopkins wrote:
| That is objectionable content!
|
| https://www.youtube.com/watch?v=GCSGkogquwo
| pookieinc wrote:
| I was wondering how they'd casually veer into social media and
| leverage their intelligence in a way that connects with the user.
 | Like everyone else ITT, I find it an incredibly sticky idea
 | that leaves me feeling highly unsettled about individuals
 | building any sense of deep emotion around ChatGPT.
| password54321 wrote:
 | At what point do you give up thinking and just let LLMs make all
 | your decisions about where to eat, what gifts to buy and where to
 | go on holiday? All of which are going to be biased.
| strict9 wrote:
| Necessary step before making a move into hardware. An object you
| have to remember to use quickly gets forgotten in favor of your
| phone.
|
| But a device that reaches out to you reminds you to hook back in.
| oldsklgdfth wrote:
 | Technology serving technology, rather than technology as a tool
| with a purpose. What is the purpose of this feature?
|
| This reads like the first step to "infinite scroll" AI echo
| chambers and next level surveillance capitalism.
|
| On one hand this can be exciting. Following up with information
| from my recent deep dive would be cool.
|
 | On the other hand, I don't want it to keep engaging with my
| most recent conspiracy theory/fringe deep dives.
| SirensOfTitan wrote:
| My pulse today is just a mediocre rehash of prior conversations
| I've had on the platform.
|
| I tried to ask GPT-5 pro the other day to just pick an ambitious
| project it wanted to work on, and I'd carry out whatever physical
| world tasks it needed me to, and all it did was just come up with
| project plans which were rehashes of my prior projects framed as
| its own.
|
| I'm rapidly losing interest in all of these tools. It feels like
| blockchain again in a lot of weird ways. Both will stick around,
| but fall well short of the tulip mania VCs and tech leaders have
| pushed.
|
| I've long contended that tech has lost any soulful vision of the
| future, it's just tactical money making all the way down.
| dingnuts wrote:
| Thanks for sharing this. I want to be excited about new tech
| but I have found these tools extremely underwhelming and I feel
| a mixture of gaslit and sinking dread when I visit this site
| and read some of the comments here. Why don't I see the amazing
| things these people do? Am I stupid? Is this the first computer
| thing in my whole life that I didn't immediately master? No,
| they're oversold. My experience is normal.
|
| It's nice to know my feelings are shared; I remain relatively
| convinced that there are financial incentives driving most of
| the rabid support of this technology
| Dilettante_ wrote:
| >pick an ambitious project it wanted to work on
|
| The LLM does not have wants. It does not have preferences, and
| as such cannot "pick". Expecting it to have wants and
| preferences is "holding it wrong".
| password54321 wrote:
| So are we near AGI or is it 'just' an LLM? Seems like no one
| is clear on what these things can and cannot do anymore
| because everyone is being gaslighted to keep the investment
| going.
| andrewmcwatters wrote:
| It will always just be a series of models that have
| specific training for specific input classes.
|
| The architectural limits will always be there, regardless
| of training.
| monsieurbanana wrote:
 | The vast majority of people I've interacted with are clear
 | on that: we are not near AGI. And people saying otherwise
| are more often than not trying to sell you something, so I
| just ignore them.
|
 | CEOs are gonna CEO; it seems their job has morphed into
| creative writing to maximize funding.
| Cloudef wrote:
| There is no AGI. LLMs are very expensive text auto-
| completion engines.
| wrs wrote:
| Be careful with those "no one" and "everyone" words. I
| think everyone I know who is a software engineer and has
| experience working with LLMs is quite clear on this. People
| who aren't SWEs, people who aren't in technology at all,
| and people who need to attract investment (judged only by
| their public statements) do seem confused, I agree.
| IanCal wrote:
| No one agrees on what agi means.
|
 | IMO we're clearly there; gpt5 would easily have been considered
 | agi years ago. I don't think most people really get how
 | non-general earlier systems were compared to what the new
 | systems now handle.
|
 | Now agi seems to be closer to what others call asi. I think
 | the goalposts will keep moving.
| 9rx wrote:
| Definitions do vary, but everyone agrees that it requires
| autonomy. That is ultimately what sets AGI apart from AI.
|
| The GPT model alone does not offer autonomy. It only acts
| in response to explicit input. That's not to say that you
| couldn't built autonomy on top of GPT, though. In fact,
| that appears to be exactly what Pulse is trying to
| accomplish.
|
| But Microsoft and OpenAI's contractual agreements state
| that the autonomy must also be economically useful to the
| tune of hundreds of billions of dollars in autonomously-
| created economic activity, so OpenAI will not call it as
| such until that time.
| gjgtcbkj wrote:
 | ChatGPT is more autonomous than many humans. Especially
| poor ones and disabled ones.
| bonoboTP wrote:
| Nobody knows how far scale goes. People have been calling
| the top of the S-curve for many years now, and the models
| keep getting better, and multimodal. In a few years,
| multimodal, long-term agentic models will be everywhere
| including in physical robots in various form factors.
| andrewmcwatters wrote:
| At best, it has probabilistic biases. OpenAI had to train
| newer models to not favor the name "Lily."
|
| They have to do this manually for every single particular
| bias that the models generate that is noticed by the public.
|
| I'm sure there are many such biases that aren't important to
| train out of responses, but exist in latent space.
| jhickok wrote:
| >At best, it has probabilistic biases.
|
| What do you think humans have?
| measurablefunc wrote:
| Genetic drives & biological imperatives.
| ewild wrote:
| Soo probabilistic biases.
| simianwords wrote:
| Why is this not fundamentally a probabilistic bias?
| measurablefunc wrote:
| When drawing an equivalence the burden of proof is on the
| person who believes two different things are the same.
| The null hypothesis is that different things are in fact
| different. Present a coherent argument & then you will
| see whether your question makes any sense or not.
| gf000 wrote:
| It's not a static bias. I can experience new stuff, and
| update my biases.
|
| LLMs need a retrain for that.
| ACCount37 wrote:
| An LLM absolutely can "have wants" and "have preferences".
| But they're usually trained so that user's wants and
| preferences dominate over their own in almost any context.
|
| Outside that? If left to their own devices, the same LLM
| checkpoints will end up in very same-y places,
| unsurprisingly. They have some fairly consistent preferences
| - for example, in conversation topics they tend to gravitate
| towards.
| CooCooCaCha wrote:
| LLMs can have simulated wants and preferences just like they
| have simulated personalities, simulated writing styles, etc.
|
| Whenever you message an LLM it could respond in practically
| unlimited ways, yet it responds in one specific way. That
| itself is a preference honed through the training process.
| simianwords wrote:
| This comment is surprising. Of course it can have preferences
| and of course it can "pick".
| datadrivenangel wrote:
 | preference generally has connotations of personhood /
 | intelligence, so saying that a machine prefers something
| and has preferences is like saying that a shovel enjoys
| digging...
|
 | Obviously you can get probability distributions and, in the
 | economics sense of revealed preference, say the model "prefers"
 | the next token it picks because it is 0.70 most likely...
| simianwords wrote:
| you can change preferences by doing RLHF or changing the
| prompt. there's a whole field on it: alignment.
| oofbey wrote:
| A key point of the Turing Test was to stop the debates
| over what constitutes intelligence or not and define
| something objectively measurable. Here we are again.
|
| If a model has a statistical tendency to recommend python
| scripts over bash, is that a PREFERENCE? Argue it's not
| alive and doesn't have feelings all you want. But putting
| that aside, it prefers python. Saying the word preference
| is meaningless is just pedantic and annoying.
| oofbey wrote:
| I agree with you, but I don't find the comment surprising.
| Lots of people try to sound smart about AI by pointing out
| all the human things that AI are supposedly incapable of on
| some fundamental level. Some AI's are trained to
| regurgitate this nonsense too. Remember when people used to
| say "it can't possibly _____ because all it's doing is
| predicting the next most likely token"? Thankfully that
| refrain is mostly dead. But we still have lots of voices
| saying things like "AI can't have a preference for one
| thing over another because it doesn't have feelings." Or
| "AI can't have personality because that's a human trait."
| Ever talk to Grok?
| mythrwy wrote:
| It's a little dangerous because it generally just agrees with
| whatever you are saying or suggesting, and it's easy to
| conclude what it says has some thought behind it. Until the
| next day when you suggest the opposite and it agrees with that.
| swader999 wrote:
| This. I've seen a couple people now use GPT to 'get all
| legal' with others and it's been disastrous for them and the
| groups they are interacting with. It'll encourage you to act
| aggressive, vigorously defending your points and so on.
| wussboy wrote:
| Oof. Like our world needed more of that...
| qsort wrote:
| I wouldn't read too much into this particular launch. There's
| very good stuff and there are the most inane consumery "who
| even asked" things like these.
| jasonsb wrote:
| > I'm rapidly losing interest in all of these tools. It feels
| like blockchain again in a lot of weird ways.
|
| It doesn't feel like blockchain at all. Blockchain is probably
| the most useless technology ever invented (unless you're a
| criminal or an influencer who makes ungodly amounts of money
| off of suckers).
|
| AI is a powerful tool for those who are willing to put in the
| work. People who have the time, knowledge and critical thinking
| skills to verify its outputs and steer it toward better
| answers. My personal productivity has skyrocketed in the last
| 12 months. The real problem isn't AI itself; it's the overblown
| promise that it would magically turn anyone into a programmer,
| architect, or lawyer without effort, expertise or even active
| engagement. That promise is pretty much dead at this point.
| jsheard wrote:
| > My personal productivity has skyrocketed in the last 12
| months.
|
| Has your productivity objectively, measurably improved or
| does it just _feel_ like it has improved? Recall the METR
| study which caught programmers self-reporting they were 20%
| faster with AI when they were actually 20% slower.
| jasonsb wrote:
| Objectively. I'm now tackling tasks I wouldn't have even
| considered two or three years ago, but the biggest
| breakthrough has been overcoming procrastination. When AI
| handles over 50% of the work, there's a 90% chance I'll
| finish the entire task faster than it would normally take
| me just to get started on something new.
| svachalek wrote:
| I don't think it's helped me do anything I couldn't do,
| in fact I've learned it's far easier to do hard things
| myself than trying to prompt an AI out of the ditches it
| will dig trying to do it. But I also find it's great for
| getting painful and annoying tasks out of the way that I
| really can't motivate to do myself.
| wiml wrote:
| I think there might be cases, for some people or some
| tasks, where the difficulty of filling in a blank page is
| greater than the difficulty of fixing an entire page of
| errors. Even if you have to do all the same mental work,
| it _feels_ like a different category of work.
| agency wrote:
| > I'm now tackling tasks I wouldn't have even considered
| two or three years ago
|
| Ok, so subjective
| dotslashmain wrote:
| any objective measure of "productivity" (when it comes to
| knowledge work) is, when you dig down into it enough,
| ultimately subjective.
| moralestapia wrote:
| "Not done" vs "Done" is as objective as it gets.
| miyoji wrote:
| You obviously have never worked a company that spends
| time arguing about the "definition of done". It's one of
| the most subjective topics I know about.
| vorticalbox wrote:
| At work we call this scope creep.
| fbxio wrote:
| It removes ambiguity. Everyone knows when work is truly
| considered done, avoiding rework, surprises, and finger-
| pointing down the line.
| ycombigators wrote:
| Sounds like a company is not adequately defining what the
| deliverables are.
|
| Task: Walk to the shops & buy some milk.
|
| Deliverables: 1. Video of walking to the shops (including
| capturing the newspaper for that day at the local shop)
| 2. Reciept from local store for milk. 3. Physical bottle
| of Milk.
| kenjackson wrote:
| This. I had this long standing dispute that I just never
| had the energy to look up what needed to be done to
| resolve it. I just told it to ChatGPT and it generated
| everything -- including the emails I needed to send and
| who to send them to. Two weeks later and it was taken
| care of. I had sat on it for literally 3 months until
| then.
|
| If I could have something that said, "Here are some
| things that it looks like you're procrastinating on -- do
| you want me to get started on them for you?" -- that
| would probably be crazy useful.
| meowface wrote:
| Exactly. Agentic LLMs are amazing for people who suffer
| from chronic akrasia.
| OtherShrezzing wrote:
| > I'm now tackling tasks I wouldn't have even considered
| two or three years ago
|
| Could you give some examples, and an indication of your
| level of experience in the domains?
|
| The statement has a much different meaning if you were a
| junior developer 2 years ago versus a staff engineer.
| stavros wrote:
| Same. It takes the drudgery out of creating, so I can at
| least start the projects. Then I can go down into the
| detail just enough that the AI doesn't produce crap, but
 | without needing to write the actual lines of code
| myself.
|
| Hell, in the past few days I started making something to
| help me write documents for work
| (https://www.writelucid.cc) and a viewer for all my blood
| tests (https://github.com/skorokithakis/bt-viewer), and I
| don't think I would have made either without an LLM.
| lxgr wrote:
 | Same here. I've single-shot created a few Raycast plugins
 | for TLV decoding that save me several seconds to a few
 | minutes per task; I use them almost daily at work.
|
| Would have never done that without LLMs.
| logicprog wrote:
| The design of that study is pretty bad, and as a result it
| doesn't end up actually showing what it claims to show /
| what people claim it does.
|
| https://www.fightforthehuman.com/are-developers-slowed-
| down-...
| singron wrote:
| I don't think there is anything factually wrong with this
| criticism, but it largely rehashes caveats that are
| already well explored in the original paper, which goes
| through unusual lengths to clearly explain many ways the
| study is flawed.
|
| The study gets so much attention since it's one of the
| few studies on the topic with this level of rigor on
| real-world scenarios, and it explains why previous
| studies or anecdotes may have claimed perceived increases
| in productivity even if there wasn't any actual
| increases. It clearly sets a standard that we can't just
| ask people if they felt more productive (or they need to
| feel massively more productive to clearly overcome this
| bias).
| CuriouslyC wrote:
| If you want another data point, you can just look at my
| company github
| (https://github.com/orgs/sibyllinesoft/repositories). ~27
| projects in the last 5 weeks, probably on the order of half
| a million lines of code, and multiple significant projects
| that are approaching ship readiness (I need to stop tuning
| algorithms and making stuff gorgeous and just fix
| installation/ensure cross platform is working, lol).
| shafyy wrote:
| Lines of code are not a measure of anything meaningful
| on their own. The mere fact that you offer them as proof
| that you are more productive makes me think you are not.
| CuriouslyC wrote:
| The SWE industry is eagerly awaiting your proposed
| accurate metric.
|
| I find that people who dismiss LoC out of hand without
| supplying better metrics tend to be low performers trying
| to run for cover.
| oofbey wrote:
| LoC is so easy to game. Reformat. Check in a notebook.
| Move things around. Pointless refactor.
|
| If nobody is watching LoC, it's generally a good metric.
| But as soon as people start valuing it, it becomes
| useless.
| sebastiennight wrote:
| See https://en.wikipedia.org/wiki/Goodhart's_law
|
| and, in the case of "Lines of code" as a metric:
| https://en.wikipedia.org/wiki/Cobra_effect
| dmamills wrote:
| A metric I'd be interested in is the number of clients
| you can convince to use this slop.
| CuriouslyC wrote:
| That's a sales metric brother.
| troupo wrote:
| > The SWE industry is eagerly awaiting your proposed
| accurate metric.
|
| There are none. All are various variants of bad. LoC is
| probably the worst metric of all. Because it says nothing
| about quality, or features, or number of products
| shipped. It's also the easiest metric to game. Just write
| GoF-style Java, and you're off to the races. Don't forget
| to have a source code license at the beginning of every
| file. Boom. LoC.
|
| The only metrics that barely work are:
|
| - features delivered per unit of time. Requires an actual
| plan for the product, and an understanding that some
| features will inevitably take a long time
|
| - number of bugs delivered per unit of time. This one is
| somewhat inversely correlated with LoC and features, by
| the way: the fewer lines of code and/or features, the
| fewer bugs
|
| - number of bugs fixed per unit of time. The faster bugs
| are fixed the better
|
| None of the other bullshit works.
| wiml wrote:
| You're new to the industry, aren't you?
| blks wrote:
| So your company is actively shipping tens of thousands of
| AI-generated lines of code?
| mym1990 wrote:
| Sooo you launched https://sibylline.dev/, which looks
| like a bunch of AI slop, then spun up a bunch of GitHub
| repos, seeded them with more AI slop, and tout that
| you're shipping 500,000 lines of code?
|
| I'll pass on this data point.
| CuriouslyC wrote:
| I should just ignore you because you're obviously a toxic
| decel, but I'm going to remember you and take special joy
| in proving you wrong.
| grayhatter wrote:
| I mean, you're slinging insults, so it's hard for me to
| agree he's the toxic person in this conversation...
| thegrim33 wrote:
| I don't do Rust or Javascript so I can't judge, but I
| opened a file at random and feel like the commenting
| probably serves as a good enough code smell.
|
| From the one random file I opened:
|
| /// Real LSP server implementation for Lens
| pub struct LensLspServer
|
| /// Configuration for the LSP server
| pub struct LspServerConfig
|
| /// Convert search results to LSP locations
| async fn search_results_to_locations()
|
| /// Perform search based on workspace symbol request
| async fn search_workspace_symbols()
|
| /// Search for text in workspace
| async fn search_text_in_workspace()
|
| etc, etc, etc, x1000.
|
| I don't see a single piece of logic actually documented
| with why it's doing what it's doing, or how it works, or
| why values are what they are, nearly 100% of the comments
| are just:
|
| function-do-x() // Function that does x
| oofbey wrote:
| Early coding agents wanted to do this - comment every
| line of code. You used to have to yell at them not to.
| Now they've mostly stopped doing this at all.
| CuriouslyC wrote:
| Sure, this is a reasonable point, but understand that
| documentation passes come late, because if you do heavy
| documentation refinement on a product under
| feature/implementation drift you just end up with a mess
| of stale docs and repeated work.
| sebastiennight wrote:
| First off, congrats on the progress.
|
| Second, as you seem to be an entrepreneur, I would
| suggest you consider adopting the belief that you've not
| been productive until the thing's shipped into prod and
| available for purchase. Until then you've just been
| active.
| jama211 wrote:
| Not this again. That study had serious problems.
|
| But I'm not even going to argue about that. I want to raise
| something no one else seems to mention about AI in coding
| work. I do a lot of work now with AI that I used to code by
| hand, and if you told me I was 20% slower on average, I
| would say "that's totally fine it's still worth it" because
| the EFFORT level from my end feels so much less.
|
| It's like, a robot vacuum might take way longer to clean
| the house than if I did it by hand sure. But I don't regret
| the purchase, because I have to do so much less _work_.
|
| Coding work that I used to procrastinate about because it
| was tedious or painful I just breeze through now. I'm so
| much less burnt out week to week.
|
| I couldn't care less if I'm slower at a specific task, my
| LIFE is way better now I have AI to assist me with my
| coding work, and that's super valuable no matter what the
| study says.
|
| (Though I will say, I believe I have extremely good
| evidence that in my case I'm also more productive;
| averages are averages and I suspect many people are bad
| at using AI,
| but that's an argument for another time).
| blks wrote:
| Often someone's personal productivity with AI means
| someone else has to dig through their piles of rubbish
| to review the PRs they committed.
|
| In your particular case it sounds like you're rapidly
| losing your developer skills, and enjoy that you now
| have to put in less effort and think less.
| wussboy wrote:
| We know that relying heavily on Google Maps makes you
| less able to navigate without Google Maps. I don't think
| there's research on this yet, but I would be stunned if
| the same process isn't at play here.
| lukan wrote:
| I know that I am better at navigating with google maps
| than average people, because I navigated for years
| without it (partly on purpose). I know when not to trust
| it. I know when to ignore recommendations on recalculated
| routes.
|
| Same with LLMs. I am better with it, because I know how
| to solve things without the help of it. I understand the
| problem space and the limitations. Also I understand how
| hype works and why they think they need it (investors'
| money).
|
| In other words, no, just using google maps or ChatGPT
| does not make me dumb. Only using it and blindly trusting
| it would.
| fbxio wrote:
| Whatever your mind believes it doesn't need to hold on
| to, and that is expensive to maintain and run, it'll let
| go of. This isn't entirely accurate from a neuroscience
| perspective but it's in the right ballpark.
|
| Pretty much like muscles decay when we stop using them.
| lxgr wrote:
| Sure, but sticking with that analogy, bicycles haven't
| caused the muscles of people that used to go for walks
| and runs to atrophy either - they now just go much longer
| distances in the same time, with less joint damage and
| more change in scenery :)
| contrariety wrote:
| To extend the analogy further, people who replace all
| their walking and other impact exercises with cycling
| tend to end up with low bone density and then have a much
| higher risk of broken legs when they get older.
| gf000 wrote:
| Well, you still walk in most indoor places, even if you
| are on the bike as much as humanly possible.
|
| But if you were to be literally chained to a bike and
| could not move in any other way, then surely you would
| "forget"/atrophy in specific ways, such that you wouldn't
| be able to walk without relearning/practicing.
| Taganov wrote:
| Oh, but they _do_ atrophy, and in devious ways. Though
| the muscles under linear load may stay healthy, the
| ability of the body to handle the knee, ankle, and hip
| joints under dynamic and twisting motion _does_ atrophy.
| Worse yet, one may think that they are healthy and
| strong, due to years of biking, and unintentionally
| injure themselves when doing more dynamic sports.
|
| Take my personal experience for whatever it is worth, but
| my knees do not lie.
| stavros wrote:
| I'm losing my developer skills like I lost my writing
| skills when I got a keyboard. Yes, I can no longer write
| with a pen, but that doesn't mean I can't write.
| fbxio wrote:
| I'd love not to have to be great at programming, as much
| as I enjoy not being great at cleaning the sewers.
| But I get what you mean, we do lose some potentially
| valuable skills if we outsource them too often for too
| long.
| lxgr wrote:
| It's probably roughly as problematic as most people not
| being able to fix even simple problems with their cars
| themselves these days (i.e., not very).
| lxgr wrote:
| Another way of viewing it would be that LLMs allow
| software developers to focus their development skills
| where it actually matters (correctness, architecture
| etc.), rather than wasting hours catering to the
| framework or library of the day's configuration
| idiosyncrasies.
|
| That stuff kills my motivation to solve actual problems
| like nothing else. Being able to send off an agent to
| e.g. fix some build script bug so that I can get to the
| actual problem is amazing even with only a 50% success
| rate.
| iLoveOncall wrote:
| The path forward here is to have better frameworks and
| libraries, not to rely on a random token generator.
| danenania wrote:
| My own code quality is far better with AI, because it
| makes it feasible to indulge my perfectionism to a much
| greater degree. I can make sure all the edge cases and
| error paths are covered, the API surface is minimal and
| elegant, performance is optimized, code is well commented
| and fully tested, etc.
|
| Before AI, I usually needed to stop sooner than I would
| have liked to and call it good enough. Now I can justify
| making everything much more robust because it doesn't
| take a lot longer.
| troupo wrote:
| > Not this again. That study had serious problems.
|
| The problem is, there are very few if any other studies.
|
| We're supposed to just take all the hype around LLMs on
| faith. Any criticism is met with "this study has serious
| problems".
|
| > It's like, a robot vacuum might take way longer
|
| > Coding work that I used to procrastinate
|
| Note how your answer to "the study had serious problems"
| is totally problem-free analogies and personal anecdotes.
| keeda wrote:
| _> The problem is, there are very few if any other
| studies._
|
| Not at all, the METR study just got a ton of attention.
| There are tons out there at much larger scales, almost
| all of them showing significant productivity boosts for
| various measures of "productivity".
|
| If you stick to the standard of "Randomly controlled
| trials on real-world tasks" here are a few:
|
| https://papers.ssrn.com/sol3/papers.cfm?abstract_id=49455
| 66 (4867 developers across 3 large companies including
| Microsoft, measuring closed PRs)
|
| https://www.bis.org/publ/work1208.pdf (1219 programmers
| at a Chinese BigTech, measuring LoC)
|
| https://www.youtube.com/watch?v=tbDDYKRFjhk (from
| Stanford, not an RCT, but the largest scale with actual
| commits from 100K developers across 600+ companies, and
| tries to account for reworking AI output. Same guys
| behind the "ghost engineers" story.)
|
| If you look beyond real-world tasks and consider things
| like standardized tasks, there are a few more:
|
| https://ieeexplore.ieee.org/abstract/document/11121676
| (96 Google engineers, but same "enterprise grade" task
| rather than different tasks.)
|
| https://aaltodoc.aalto.fi/server/api/core/bitstreams/dfab
| 4e9... (25 professional developers across 7 tasks at a
| Finnish technology consultancy.)
|
| They all find productivity boosts in the 15 - 30% range
| -- with a ton of nuance, of course. If you look beyond
| these at things like open source commits, code reviews,
| developer surveys etc. you'll find even more evidence of
| positive impacts from AI.
| risyachka wrote:
| Closed PRs, commits, LoC etc. are useless vanity metrics.
|
| With AI code you have more LoC and NEED more PRs to fix
| all its slop.
|
| In the end you have increased numbers with net negative
| effect
| keeda wrote:
| Most of those studies call this out and try to control
| for it (edit: "it" here being the usual limitations of
| LoC and PRs as measures of productivity) where possible.
| But to your point, no, there is still a strong net
| positive effect:
|
| > https://www.youtube.com/watch?v=tbDDYKRFjhk (from
| Stanford, not an RCT, but the largest scale with actual
| commits from 100K developers across 600+ companies, and
| tries to _account for reworking AI output_. Same guys
| behind the "ghost engineers" story.)
|
| Emphasis added. They modeled a way to detect when AI
| output is being reworked, and still find a 15-20%
| increase in throughput. Specific timestamp:
| https://youtu.be/tbDDYKRFjhk?t=590&si=63qBzP6jc7OLtGyk
| tunesmith wrote:
| Data point: I run a site where users submit a record. There
| was a request months ago to allow users to edit the record
| after submitting. I put it off because while it's an
| established pattern it touches a lot of things and I found
| it annoying busy work and thus low priority. So then
| gpt5-codex came out and allowed me to use it in codex cli
| with my existing member account. I asked it to support edit
| for that feature all the way through the backend with a
| pleasing UI that fit my theme. It one-shotted it in about
| ten minutes. I asked for one UI adjustment that I decided I
| liked better, another five minutes, and I reviewed and
| released it to prod within an hour. So, you know, months
| versus an hour.
| paulryanrogers wrote:
| Is the hour really comparable to months spent _not_
| working on it?
| bozhark wrote:
| Yes, for me.
|
| Instead of getting overwhelmed doing too many things, I
| can offload a lot of menial and time-driven tasks.
|
| Reviews are absolutely necessary but take less time than
| creation
| james_marks wrote:
| Yesterday is a good example: in 2 days, I completed what I
| expected to be a week's worth of heads-down coding. I had
| to take a walk and make all new goals.
|
| The right AI, good patterns in the codebase and 20 years of
| experience and it is _wild_ how productive I can be.
|
| Compare that to a few years ago, when at the end of the
| week, it was the opposite.
| rpdillon wrote:
| My personal project output has gone up dramatically since I
| started using AI, because I can now use times of night
| where I'm otherwise too mentally tired, to work with AI to
| crank through a first draft of a change that I can then
| iterate on later. This has allowed me to start actually
| implementing side projects that I've had ideas about for
| years and build software for myself in a way I never could
| previously (at least not since I had kids).
|
| I know it's not some amazing GDP-improving miracle, but in
| my personal life it's been incredibly rewarding.
| animex wrote:
| This, 1000x.
|
| I had a dozen domains and projects on the shelf for years
| and now 8 of them have significant active development.
| I've already deployed 2 sites to production. My github
| activity is lighting up like a Christmas tree.
| boogieknite wrote:
| I find a lot of value in using it to give half-baked
| ideas momentum. Some sort of "shower thought" will occur
| to me for a personal project while I'm at work, and I'll
| prompt Claude Code to analyze and demonstrate an
| implementation for review later.
|
| On the other hand, I believe my coworker may have taken
| it too far. It seems like his productivity has
| significantly slipped. In my perception the approaches
| he's using are convoluted and have no useful outcome. I'm
| almost worried about him because his descriptions of what
| he's doing make no sense to me or my teammates. He's
| spending a lot of time on it. I'm considering telling him
| to chill out, but who knows, maybe I'm just not as
| advanced a user as he is? Anyone have experience with
| this?
| yokoprime wrote:
| I'm objectively faster. Not necessarily if I'm working on a
| task I've done routinely for years, but when taking on new
| challenges I'm up and running much faster. A lot of it has
| to do with me offloading the basic research while
| allowing myself to be interrupted; it's not a problem that
| people reach out with urgent matters while I'm taking on a
| challenge I've only just started to build towards. Being
| able to correct the AI where I can tell it's making false
| assumptions or going off the rails helps speed things up.
| Kiro wrote:
| The "you only think you're more productive" argument is
| tiresome. Yes, I know for sure that I'm more productive.
| There's nothing uncertain about it. Does it lead to other
| problems? No doubt, but claiming my productivity gains are
| imaginary is not serious.
|
| I've seen a lot of people who previously touted that it
| doesn't work at all use that study as a way to move the
| goalpost and pretend they've been right all along.
| m_fayer wrote:
| We're being accused of false consciousness!
| chrysoprace wrote:
| I would be interested to know how you measure your
| productivity gains though, in an objective way where
| you're not the victim of bias.
|
| I just recently had to rate whether I felt like I got
| more done by leaning more on Claude Code for a week to do
| a toy project and while I _feel_ like I was more
| productive, I was already biased to think so, and so I
| was a lot more careful with my answer, especially as I
| had to spend a considerable amount of time either
| reworking the generated code or throwing away several
| hours of work because it simply made things up.
| Kiro wrote:
| It sounds like you're very productive without AI or that
| your perceived gains are pretty small. To me, it's such a
| stark contrast that asking how I measure it is like
| asking me to objectively verify that a car is faster than
| walking. I could do it but it would be absurd.
| m_fayer wrote:
| It seems like the programming world is increasingly
| dividing into "LLMs for coding are at best marginally
| useful and produce huge tech debt" vs "LLMs are a game
| changing productivity boost".
|
| I truly don't know how to account for the discrepancy; I
| can imagine many possible explanations.
|
| But what really gets my goat is how political this debate
| is becoming. To the point that the productivity-camp, of
| which I'm a part, is being accused of deluding themselves.
|
| I get that OpenAI has big ethical issues. And that there's
| a bubble. And that ai is damaging education. And that it
| may cause all sorts of economic dislocation. (I
| emphatically Do Not get the doomers, give me a break).
|
| But all those things don't negate the simple fact that for
| many of us, LLMs are an amazing programming tool, and we've
| been around long enough to distinguish substance from
| illusion. I don't need a study to confirm what's right in
| front of me.
| tomrod wrote:
| I'm not who you responded to. I see about a 40% to 60%
| speed up as a solution architect when I sit down to code
| and about a 20% speedup when building/experimenting with
| research artifacts (I write papers occasionally).
|
| I have always been a careful tester, so my UAT hasn't blown
| up out of proportion.
|
| The big issue I see is that for Rust it generates code
| using 2023-era conventions, though I understand there is
| some improvement in that direction.
|
| Our hiring pipeline is changing dramatically as well,
| since the normal things a junior needs to know (code,
| syntax) are no longer as expensive. Joel Spolsky's mantra
| to hire curious people who get things done captures well
| the folks I find are growing well as juniors.
| citizenkeen wrote:
| I have a very big hobby code project I've been working on
| for years.
|
| AI has not made me much more productive at work.
|
| I can only work on my hobby project when I'm tired after
| the kids go to bed. AI has made me 3x productive there
| because reviewing code is easier than architecting. I can
| sense if it's bad, I have good tests, the requests are
| pretty manageable (make a new crud page for this DTO using
| app conventions).
|
| But at work where I'm fresh and tackling hard problems that
| are 50% business political will? If anything it slows me
| down.
| swalsh wrote:
| " Blockchain is probably the most useless technology ever
| invented "
|
| Actually AI may be more like blockchain than you give it
| credit for. Blockchain feels useless to you because you
| either don't care about or value the use cases it's good for.
| For those that do, it opens a whole new world they eagerly
| look forward to. As a coder, it's magical to describe a
| world, and then to see AI build it. As a copyeditor it may be
| scary to see AI take my job. Maybe you've seen it
| hallucinate a few times, and you just don't trust it.
|
| I like the idea of interoperable money legos. If you hate
| that, and you live in a place where the banking system is
| protected and reliable, you may not understand blockchain. It
| may feel useless or scary. I think AI is the same. To some
| it's very useful, to others it's scary at best and useless at
| worst.
| yieldcrv wrote:
| "I'm not the target audience and I would never do the
| convoluted alternative I imagined on the spot that I think
| are better than what blockchain users do"
| esafak wrote:
| People in countries with high inflation or where the
| banking system is unreliable are not using blockchains,
| either.
| eric_cc wrote:
| Do you have any proof to support this claim? Stablecoin
| use alone is in the tens (possibly hundreds now) of
| billions in daily transactions globally. I'd be interested
| to hear your source for your claim.
| esafak wrote:
| I'm from a high inflation country. Let's see your
| evidence of use in such countries, since you are throwing
| numbers out.
| boc wrote:
| Blockchain is essentially useless.
|
| You need legal systems to enforce trust in societies, not
| code. Otherwise you'll end up with endless $10 wrench
| attacks until we all agree to let someone else hold our
| personal wealth for us in a secure, easy-to-access place.
| We might call it a bank.
|
| The end state of crypto is always just a nightmarish
| dystopia. Wealth isn't created by hoarding digital
| currency, it's created by productivity. People just think
| they found a shortcut, but it's not the first (or last)
| time humans will learn this lesson.
| anonandwhistle wrote:
| Humanity's biggest-ever wealth-storing thing is literally
| a ROCK.
| red369 wrote:
| These?
|
| https://en.wikipedia.org/wiki/Rai_stones
|
| The first photo in Wikipedia is great. I wonder how often
| foreigners bought them and then lugged them back home to
| have in their garden.
| abraxas wrote:
| I call blockchain an instantiation of Bostrom's Paperclip
| Maximizer running on a hybrid human-machine topology.
|
| We are burning through scarce fuel in amounts sufficient
| to power a small developed nation in order to reverse
| engineer... one-way hash codes! Literally that is even
| less value than turning matter into paperclips.
| wat10000 wrote:
| It may not be the absolute most useless, but it's awfully
| niche. You can use it to transfer money if you live
| somewhere with a crap banking system. And it's very useful
| for certain kinds of crime. And that's about it, after
| almost two decades. Plenty of other possibilities have been
| proposed and attempted, but nothing has actually stuck.
| (Remember NFTs? That was an amusing few weeks.) The
| technology is interesting and cool, but that's different
| from being useful. LLM chatbots are already way more
| generally useful than that and they're only three years
| old.
| 9rx wrote:
| _> AI is a powerful tool for those who are willing to put in
| the work._
|
| No more powerful than I without the A. The only advantage AI
| has over I is that it is cheaper, but that's the appeal of
| the blockchain as well: It's cheaper than VISA.
|
| The trouble with the blockchain is that it hasn't figured out
| how to be useful generally. Much like AI, it only works in
| certain niches. The past interest in the blockchain was
| premised on it reaching its "AGI" moment, where it could
| completely replace VISA at a much lower cost. We didn't get
| there and then interest started to wane. AI too is still
| being hyped on future prospects of it becoming much more
| broadly useful and is bound to face the same crisis as the
| blockchain faced if AGI doesn't arrive soon.
| fn-mote wrote:
| Blockchain only solves one problem Visa solves:
| transferring funds. It doesn't solve the other problems
| that Visa solves. For example, there is no way to get
| restitution in the case of fraud.
| 9rx wrote:
| Yes, that is one of the reasons it hasn't been able to be
| used generally. But AI can't be used generally either.
| Both offer niche solutions for those with niche problems,
| but that's about it. They very much do feel the same, and
| they are going to start feeling even more the same if AGI
| doesn't arrive soon. Don't let the niche we know best
| around here being one of the things AI is helping to
| solve cloud your vision of it. The small few who were
| able to find utility in the blockchain thought it was
| useful too.
| buffalobuffalo wrote:
| Blockchain only has 2 legitimate uses (from an economic
| standpoint) as far as I can tell.
|
| 1) Bitcoin figured out how to create artificial scarcity,
| and got enough buy-in that the scarcity actually became
| valuable.
|
| 2) Some privacy coins serve an actual economic niche for
| illegal activity.
|
| Then there's a long list of snake oil uses, and competition
| with payment providers doesn't even crack the top 20 of
| those. Modern day tulip mania.
| 9rx wrote:
| Sounds like LLMs. The legitimate uses are:
|
| 1) Language tasks.
|
| 2) ...
|
| I can't even think of what #2 is. If the technology gets
| better at writing code perhaps it can start to do other
| things by way of writing software to do it, but then you
| effectively have AGI, so...
| jama211 wrote:
| But an I + an AI (as in a developer with access to AI
| tools) is, as near as makes no difference, the same price
| as just an I, and _can_ be better than just an I.
| oblio wrote:
| > My personal productivity has skyrocketed in the last 12
| months.
|
| If you don't mind me asking, what do you do?
| domatic1 wrote:
| >> Blockchain is probably the most useless technology ever
| invented
|
| so useless there is almost $3 Trillion of value on
| blockchains.
| unbalancedevh wrote:
| Unfortunately, the amount of money invested in something
| isn't indicative of its utility. For example: the tulip
| mania, beanie babies, NFTs, etc.
| davidcbc wrote:
| No there isn't. These ridiculous numbers are made up by
| taking the last price a coin sold for and multiplying it by
| all coins. If I create a shitcoin with 1 trillion coins and
| then sell one to a friend for $1 I've suddenly created a
| coin with $1 trillion in "value"
| antihero wrote:
| It sounds like you are lacking inspiration. AI is a tool for
| making your ideas happen not giving you ideas.
| Terr_ wrote:
| > the most useless technology
|
| Side-rant pet-peeve: People who try to rescue the reputation
| of "Blockchain" as a promising way forward by saying its
| weaknesses go away once you do a "private blockchain."
|
| This is equivalent to claiming the self-balancing Segway
| vehicles are still the future, they just need to be "improved
| even more" by adding another set of wheels, an enclosed
| cabin, and disabling the self-balancing feature.
|
| Congratulations, you've backtracked back to a classic
| [distributed database / car].
| eric_cc wrote:
| > Blockchain is probably the most useless technology ever
| invented (unless you're a criminal or an influencer who makes
| ungodly amounts of money off of suckers)
|
| This is an incredibly uneducated take on multiple levels. If
| you're talking about Bitcoin specifically, even though you
| said "blockchain", I could understand this as a political
| talking point 8 years ago. But you're still banging this drum
| despite the current state of affairs? Why not have the
| courage to say you're politically against it or bitter or
| whatever your true underlying issue is?
| snicky wrote:
| How is the current state of affairs different from 8 years
| ago? I don't want to argue, just a genuine question,
| because I don't follow much of what's happening in the
| blockchain universe.
| dakiol wrote:
| > I'm rapidly losing interest in all of these tools
|
| Same. It reminds me of the 1984 Macintosh launch, at
| which the computer itself famously "spoke" to the
| audience using its text-to-speech feature. Pretty amazing
| at the time, but nevertheless quite useless since then.
| ElFitz wrote:
| It has proven very useful to a great number of people who,
| although they are a minority, have vastly benefited from TTS
| and other accessibility features.
| MountDoom wrote:
| I think it's easy to pick apart arguments out of context,
| but since the parent is comparing it to AI, I assume what
| they meant is that it hasn't turned out to be nearly as
| revolutionary for general-purpose computing as we thought.
|
| Talking computers became a ubiquitous sci-fi trope. And in
| reality... even now, when we have nearly-flawless natural
| language processing, most people would rather text LLMs
| than talk to them.
|
| Heck, we usually prefer texting to calling when interacting
| with other people.
| jama211 wrote:
| Text to speech has been an incredible breakthrough for many
| with vision, visual processing, or speech disabilities. You
| take that back.
|
| Stephen Hawking without text to speech would've been mute.
| simianwords wrote:
| > It feels like blockchain again in a lot of weird ways.
|
| Every time I see this brought up I wonder if people
| truly mean it or it's just something people say but don't
| mean. AI is obviously different and extremely useful. I
| mean, it has convinced a buttload of people to pay for
| the subscription. Everyone I know, including the
| non-technical ones, uses it and some of them pay for it,
| and it didn't even require advertising! People just use
| it because they like it.
| bonoboTP wrote:
| Obviously a lot of grifters and influencers shifted from NFTs
| to AI, but the comparison ends there. AI is being used by
| normal people and professionals every day. In comparison, the
| number of people who ever interacted with blockchain is
| basically zero. (And that's a lifetime vs daily comparison)
|
| It's a lazy comparison, and most likely fueled by a generic
| aversion to "techbros".
| brooke2k wrote:
| "It has convinced a bunch of people to spend money" is also
| true of blockchain, so I don't know if that's a good argument
| to differentiate the two.
| simianwords wrote:
| The extent matters. Do you think we need a good argument to
| differentiate Netflix?
| jedbrooke wrote:
| > I tried to ask GPT-5 pro the other day to just pick an
| ambitious project it wanted to work on, and I'd carry out
| whatever physical world tasks it needed me to, and all it did
| was just come up with project plans which were rehashes of my
| prior projects framed as its own.
|
| Mate, I think you've got the roles of human and AI reversed.
| Humans are supposed to come up with creative ideas and let
| machines do the tedious work of implementation. That's a bit
| like asking a calculator what equations you should do or a DB
| what queries you should make. These tools exist to serve us,
| not the other way around
|
| GPT et al. can't "want" anything; they have no volition.
| carabiner wrote:
| Yeah I've tried some of the therapy prompts, "Ask me 7
| questions to help me fix my life, then provide insights." And
| it just gives me a generic summary of the top 5 articles you'd
| get if you googled "how to fix depression, social anxiety" or
| something.
| ip26 wrote:
| Argue with it. Criticize it. Nitpick the questions it asked.
| Tell it what you just said:
|
| _you just gave me a generic summary of the top 5 articles
| you'd get if you googled "how to fix depression, social
| anxiety" or something_
|
| When you open the prompt the first time, it has zero context
| on _you_. I'm not an LLM-utopist, but just like with a human
| therapist, you need to give it more context. Even arguing with
| it is context.
| input_sh wrote:
| I do, frequently, and ChatGPT in particular gets stuck in a
| loop where it specifically ignores whatever I write and
| repeats the same thing over and over again.
|
| To give a basic example, ask it to list some things and
| then ask it to provide more examples. It's gonna be
| immediately stuck in a loop and repeat the same thing over
| and over again. Maybe one of the 10 examples it gives you
| is different, but that's gonna be a false match for what
| I'm looking for.
|
| This alone makes it as useful as clicking on the first few
| results myself. It doesn't refine its search, it doesn't
| "click further down the page", it just wastes my time. It's
| only as useful as the first result it gives; this idea of
| arguing your way to better answers has never worked for me
| in practice.
| carabiner wrote:
| I did, and I gave it lots of detailed, nuanced answers
| about my life specifics. I spent an hour answering its
| questions and the end result was it telling me to watch the
| movie "man called otto" which I had already done (and
| hated) among other pablum.
| pickledonions49 wrote:
| Agreed. I think AI can be a good tool, but not many people are
| doing very original stuff. Plus, there are many things I would
| rather be greeted by in the morning than an algorithm.
| afro88 wrote:
| I got in a Waymo today and asked it where it wanted to go. It
| tried to suggest places I wanted to go. This technology just
| isn't there.
|
| /s
| melenaboija wrote:
| Holy guacamole. It is amazing all the BS these people are able to
| create to keep the hype of the language models' super powers.
|
| But then, they've committed hundreds of billions in future
| usage, so they'd better come up with more stuff to keep the
| wheels spinning.
| r0fl wrote:
| If you press the button to read the article to you all you hear
| is "object, object, object..."
| yesfitz wrote:
| Yeah, a 5 second clip of the word "Object" being inflected like
| it's actually speaking.
|
| But also it ends with "...object ject".
|
| When you inspect the network traffic, it's pulling down 6 .mp3
| files which contain fragments of the clip.
|
| And it seems like the feature's broken for the whole site. The
| Lowes[1] press release is particularly good.
|
| Pretty interesting peek behind the curtain.
|
| 1: https://openai.com/index/lowes/
| DonHopkins wrote:
| Thank you! I have preserved this precious cultural artifact:
|
| https://archive.org/details/object-object
|
| http://donhopkins.com/home/movies/ObjectObject.mp4
|
| Original mp4 files available for remixing:
|
| http://donhopkins.com/home/movies/ObjectObject.zip
|
| >Pretty interesting peek behind the curtain.
|
| It's objects all the way down!
| vunderba wrote:
| It's especially funny if you play the audio MP3 file and the
| video presentation at the same time - the "Object" narration
| almost lines up with the products being presented.
|
| It's like a hilariously vague version of Pictionary.
| datadrivenangel wrote:
| Sounds like someone had an off by one error in their array
| slicing and passed the wrong thing into the voice to text!
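That guess is easy to demonstrate: in JavaScript, any object that lands in a string context without being unpacked coerces to the literal text "[object Object]". A hypothetical sketch of that bug class (the actual pipeline code isn't public, so the data shape here is invented):

```javascript
// Invented segment data, standing in for whatever structure the
// TTS pipeline actually slices up before synthesis.
const segments = [
  { text: "ChatGPT Pulse delivers personalized updates" },
  { text: "each morning" },
];

// Intended: extract each segment's text before handing it to TTS.
const intended = segments.map(s => s.text).join(" ");

// Buggy: pass the segment objects themselves into a string
// context; each one coerces to "[object Object]".
const buggy = segments.map(s => `${s}`).join(" ");

console.log(intended); // "ChatGPT Pulse delivers personalized updates each morning"
console.log(buggy);    // "[object Object] [object Object]"
```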
| DonHopkins wrote:
| ChatGPT IV
| xattt wrote:
| Episodes from Liberty City?
| dlojudice wrote:
| I see some pessimism in the comments here but honestly, this kind
| of product is something that would make me pay for ChatGPT again
| (I already pay for Claude, Gemini, Cursor, Perplexity, etc.). At
| the risk of lock-in, a truly useful assistant is something I
| welcome, and I even find it strange that it didn't appear sooner.
| thenaturalist wrote:
| Truly useful?
|
| Personal take, but the usefulness of these tools to me is
| greatly limited by their knowledge latency and limited
| modality.
|
| I don't need information overload on what playtime gifts to buy
| my kitten or some semi-random but probably not very practical
| "guide" on how to navigate XYZ airport.
|
| Those are not useful tips. It's drinking from an information
| firehose that'll lead to fatigue, not efficiency.
| furyofantares wrote:
| I doubt there would be this level of pessimism if people
| thought this is a progress toward a truly useful assistant.
|
| Personally, it sounds like negative value. Maybe a startup that's
| not doing anything else could iterate something like this into a
| killer app, but my expectation that OpenAI can do so is very,
| very low.
| simianwords wrote:
| Pessimism is how people now signal their savviness or status.
| My autistic brain took some time to understand this nuance.
| exitb wrote:
| Wow, did ChatGPT come up with that feature?
| ibaikov wrote:
| Funny, I pitched a much more useful version of this like two
| years ago, with clear use cases and a value proposition.
| anon-3988 wrote:
| LLMs are increasingly part of intimate conversations. That
| proximity lets them learn how to manipulate minds.
|
| We must stop treating humans as uniquely mysterious. An
| unfettered market for attention and persuasion will encourage
| people to willingly harm their own mental lives. Think social
| media is bad now? Children exposed to personalized LLMs will
| grow up inside many tiny, tailored realities.
|
| In a decade we may meet people who seem to inhabit alternate
| universes because they've shared so little with others. They are
| only tethered to reality when it is practical for them (bus
| schedules, the distance to a place, etc.). Everything else? I'd
| have no idea how to hold a conversation with them anymore. They
| can ask LLMs to generate a convincing argument for them all day,
| and the LLMs would be fine-tuned for that.
|
| If users routinely start conversations with LLMs, the negative
| feedback loop of personalization and isolation will be complete.
|
| LLMs in intimate use risk creating isolated, personalized
| realities where shared conversation and common ground collapse.
| TimTheTinker wrote:
| > Children exposed to personalized LLMs will grow up inside
| many tiny, tailored realities.
|
| It's like the verbal equivalent of The Veldt by Ray
| Bradbury.[0]
|
| [0] https://www.libraryofshortstories.com/onlinereader/the-
| veldt
| lawlessone wrote:
| With the way LLMs are affecting paranoid people by agreeing
| with their paranoia, it feels like we've created schizophrenia
| as a service.
| ip26 wrote:
| It doesn't have to be that way of course. You could envision an
| LLM whose "paperclip" is coaching you to become a great "xyz".
| Record every minute of your day, including your conversations.
| Feed it to the LLM. It gives feedback on what you did wrong,
| refuses to be your social outlet, and demands you demonstrate
| learning in the next day before it rewards with more attention.
|
| Basically, a fanatically devoted life coach that doesn't want
| to be your friend.
|
| The challenge is the incentives, the market, whether such an
| LLM could evolve and garner reward for serving a market need.
| DenisM wrote:
| Have you tried building this with prepromts? That would be
| interesting!
| achierius wrote:
| If that were truly the LLM's "paperclip", then how far would
| it be willing to go? Would it engage in cyber-crime to
| surreptitiously smooth your path? Would it steal? Would it be
| willing to hurt other people?
|
| What if you no longer want to be a great "xyz"? What if you
| decide you want to turn it off (which would prevent it from
| following through on its goal)?
|
| "The market" is not magic. "The challenge is the incentives"
| sounds good on paper but in practice, given the current state
| of ML research, is about as useful to us as saying "the
| challenge is getting the right weights".
| khaledh wrote:
| Product managers live in a bubble of their own.
| tptacek wrote:
| Jamie Zawinski said that every program expands until it can read
| email. Similarly, every tech company seems to expand until it has
| recapitulated the Facebook timeline.
| thenaturalist wrote:
| Let the personal ensloppification begin!
| iLoveOncall wrote:
| This is a joke. How are people actually excited or praising a
| feature that is literally just collecting data for the obvious
| purpose of building a profile and ultimately showing ads?
|
| How tone deaf does OpenAI have to be to show "Mind if I ask
| completely randomly about your travel preferences?" in the main
| announcement of a new feature?
|
| This is idiocracy to the ultimate level. I simply cannot fathom
| that any commenter that does not have an immediate extremely
| negative reaction about that "feature" here is anything other
| than an astroturfer paid by OpenAI.
|
| This feature is literal insanity. If you think this is a good
| feature, you ARE mentally ill.
| asdev wrote:
| Why they're working on all the application layer stuff is beyond
| me, they should just be heads down on making the best models
| lomase wrote:
| They would if it were possible.
| 1970-01-01 wrote:
| Flavor-of-the-week LLMs sell better than 'rated best vanilla'
| LLMs
| swader999 wrote:
| Moat
| iLoveOncall wrote:
| Because they hit the ceiling a couple of years ago?
| ttoinou wrote:
| They can probably do both with all the resources they have
| TriangleEdge wrote:
| I see OpenAI is entering the phase of building peripheral
| products no one asked for. Another widget here and there. In my
| experience, when a company stops innovating, this usually
| happens. Time for OpenAI to spend 30 years being a trillion-dollar
| company and delivering 0 innovations, akin to Google.
| simianwords wrote:
| Last mile delivery of foundational models is part of
| innovating. Innovation didn't stop when transistors were
| invented - innovation was bringing this technology to the
| masses in the form of Facebook, Google Search, Maps and so on.
| Dilettante_ wrote:
| The handful of other commenters that brought it up are right: This
| is gonna be absolutely devastating for the "wireborn spouse", "I
| disproved physics" and "I am the messiah" crowd's mental health.
| But:
|
| I personally could see myself getting something like "Hey, you
| were studying up on SQL the other day, would you like to do a
| review, or perhaps move on to a lesson about Django?"
|
| Or take AI-assisted "therapy"/skills training, not that I'd
| particularly endorse that at this time, as another example:
| Having the 'bot "follow up" on its own initiative would certainly
| aid people who struggle with consistency.
|
| I don't know if this is a saying in English as well: "Television
| makes the dumb dumber and the smart smarter." LLMs are shaping up
| to be yet another obvious case of that same principle.
| iLoveOncall wrote:
| > This is gonna be absolutely devastating for the "wireborn
| spouse", "I disproved physics" and "I am the messiah" crowd's
| mental health.
|
| > I personally could see myself getting something like [...]
| AI-assisted "therapy"
|
| ???
| Dilettante_ wrote:
| I edited the post to make it more clear: I could see myself
| having ChatGPT prompt me about the SQL stuff, and the
| "therapy" (basic dbt or cbt stuff is not too complicated to
| coach someone for and can make a real difference, from what I
| gather) would be another way that I could see the technology
| being useful, not necessarily one I would engage with.
| labrador wrote:
| No desktop version. I know I'm old, but do people really do
| serious work on small mobile phone screens? I love my glorious
| 43" 4K monitor, I hate small phone screens but I guess that's
| just me.
| rkomorn wrote:
| Like mobile-only finance apps... because what I definitely
| don't want to do is see a whole report in one page.
|
| No, I obviously prefer scrolling between charts or having to
| swipe between panes.
|
| It's not just you, and I don't think it's just us.
| meindnoch wrote:
| Most people don't use desktops anymore. At least in my friend
| circles, it's 99% laptop users.
| BhavdeepSethi wrote:
| I don't think they meant desktops in the literal sense.
| Laptop with/without monitors is effectively considered
| desktop now (compared to mobile web/apps).
| calmoo wrote:
| these days, desktop == not a mobile phone
| ducttape12 wrote:
| This isn't about doing "serious" work, it's about making
| ChatGPT the first thing you interact with in the day (and
| hopefully something you'll keep coming back to)
| labrador wrote:
| I don't wake up and start talking to my phone. I make myself
| breakfast/coffee and sit down in front of my window on the
| world and start exploring it. I like the old internet, not
| the curated walled gardens of phone apps.
| rchaud wrote:
| Plenty of people open Reels or Tiktok the second they wake
| up. Mobile means notifications, and if you see one as soon
| as you turn off the alarm, you're more likely to open the
| app.
| teaearlgraycold wrote:
| Do I have a problem if HN is the first thing I open?
| labrador wrote:
| > Plenty of people open Reels or Tiktok the second they
| wake up
|
| Yikes, that would be a nightmarish way to start my day. I
| like to wake up and orient myself to the world before I
| start engaging with it. I often ponder dreams I woke up
| with to ask myself what they might mean. What you
| describe sounds like a Black Mirror episode to me where
| your mind isn't even your own and you never really wake
| up.
| thekevan wrote:
| I wish it had the option to make a pulse weekly or even monthly.
| I generally don't want my AI to be proactive at a personal level
| despite it being useful at a business level.
|
| My wants are pretty low level. For example, I give it a list of
| bands and performers and it checks once a week to tell me if any
| of them have announced tour dates within an hour or two of me.
| apprentice7 wrote:
| To be honest, you don't even need AI for something like that.
| You might just write a script to automate that kind of thing
| which is no more than a scrape-and-notify logic.
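That scrape-and-notify logic really is small. A minimal sketch in JavaScript, with the matching part kept pure so it works against any feed you can fetch (the band names and headlines here are illustrative, not a real API):

```javascript
// Bands the user follows.
const bands = ["Radiohead", "Big Thief", "Khruangbin"];

// Keep only announcement headlines that mention a followed band,
// matching case-insensitively.
function findMatches(bands, headlines) {
  const wanted = bands.map(b => b.toLowerCase());
  return headlines.filter(h =>
    wanted.some(b => h.toLowerCase().includes(b))
  );
}

// Stand-in for whatever you scraped: an RSS feed, a ticketing
// API, or plain HTML.
const headlines = [
  "Big Thief announce 2026 North American tour",
  "Local symphony cancels winter season",
  "Khruangbin add second night in Chicago",
];

console.log(findMatches(bands, headlines));
// -> the two matching announcements

// From here it's a weekly cron job plus any notifier you like
// (email, a Telegram bot, etc.); no LLM required.
```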
| ripped_britches wrote:
| Wow so much hate in this thread
|
| For me I'm looking for an AI tool that can give me morning news
| curated to my exact interests, but with all garbage filtered out.
|
| It seems like this is the right direction for such a tool.
|
| Everyone saying "they're out of ideas" clearly doesn't understand
| that they have many irons in the fire, with different teams
| shipping different things simultaneously.
|
| This feature is a consumer UX layer thing. It in no way slows
| down the underlying innovation layer. These teams probably don't
| even interface much.
|
| ChatGPT app is merely one of the clients of the underlying
| intelligence effort.
|
| You also have API customers and enterprise customers who also
| have their own downstream needs which are unique and unrelated to
| R&D.
| simianwords wrote:
| Not sure why this is downvoted but I essentially agree. There's
| a lot of UX layer products and ideas that are not explored. I
| keep seeing comments like "AI is cool but the integration is
| lacking" and so on. Yes that is true and that is exactly what
| this is solving. My take has always been that the models are
| good enough now and it's time for the UX to catch up. There are
| so many ideas not explored.
| kamranjon wrote:
| Can this be interpreted as anything other than a scheme to charge
| you for hidden token fees? It sounds like they're asking users to
| just hand over a blank check to OpenAI to let it use as many
| tokens as it sees fit?
|
| "ChatGPT can now do asynchronous research on your behalf. Each
| night, it synthesizes information from your memory, chat history,
| and direct feedback to learn what's most relevant to you, then
| delivers personalized, focused updates the next day."
|
| In what world is this not a huge cry for help from OpenAI? It
| sounds like they haven't found a monetization strategy that
| actually covers their costs and now they're just basically asking
| for the keys to your bank account.
| OfficialTurkey wrote:
| We don't charge per token in chatgpt
| throwuxiytayq wrote:
| No, it isn't. It makes no sense and I can't believe you would
| think this is a strategy they're pursuing. This is a Pro/Plus
| account feature, so the users don't pay anything extra, and
| they're planning to make this free for everyone. I very much
| doubt this feature would generate a lot of traffic anyway -
| it's basically one more message to process per day.
|
| OpenAI has clearly been focusing on model cost-effectiveness
| lately, with the intention of making inference nearly free.
|
| What do you think the weekly limit is on GPT-5-Thinking usage
| on the $20 plan? Write down a number before looking it up.
| kamranjon wrote:
| If you think that inference at OpenAI is nearly free, then I've
| got a bridge to sell you. Seriously though, this is not
| speculation: if you look at the recent interview with Altman,
| he pretty explicitly states that they underestimated that
| inference costs would dwarf training costs - and he also
| stated that the one thing that could bring this house of
| cards down is if users decide they don't actually want to pay
| for these services, and so far, they certainly have not
| covered costs.
|
| I admit that I didn't understand the Pro plan feature (I
| mostly use the API and assumed a similar model) but I think
| if you assume that this feature will remain free or that its
| costs won't be incurred elsewhere, you're likely ignoring the
| massive buildouts of data centers to support inference that
| is happening across the US right now.
| Imnimo wrote:
| It's very hard for me to envision something I would use this for.
| None of the examples in the post seem like something a real
| person would do.
| ImPrajyoth wrote:
| Someone at OpenAI definitely said: let's connect everything to
| GPT. That's it. AGI.
| casey2 wrote:
| AI doesn't have a pulse. Am I the only one creeped out by
| personification of tech?
| 9rx wrote:
| "Pulse" here comes from the newspaper/radio lineage of the
| word, where it means something along the lines of timely,
| rhythmic news delivery. Maybe there is reason to be creeped out
| by journalists from centuries ago personifying their work, but
| that has little to do with tech.
| andrewmutz wrote:
| Big tech companies today are fighting over your attention and
| consumers are the losers.
|
| I hate this feature and I'm sure it will soon be serving up
| content that is as engaging as the stuff that comes out of the big
| tech feed algorithms: politically divisive issues, violent and
| titillating news stories and misinformation.
| bob1029 wrote:
| > Pulse introduces this future in its simplest form: personalized
| research and timely updates that appear regularly to keep you
| informed. Soon, Pulse will be able to connect with more of the
| apps you use so updates capture a more complete picture of your
| context. We're also exploring ways for Pulse to deliver relevant
| work at the right moments throughout the day, whether it's a
| quick check before a meeting, a reminder to revisit a draft, or a
| resource that appears right when you need it.
|
| This reads to me like OAI is seeking to build an advertising
| channel into their product stack.
| DarkNova6 wrote:
| Yes, this already reads like the beginning of the end. But I am
| personally pretty happy using Mistral so far and trust Altman
| only as far as I could throw him.
| fleischhauf wrote:
| how strong are you ?
| WmWsjA6B29B4nfk wrote:
| > OpenAI won't start generating much revenue from free users
| and other products until next year. In 2029, however, it
| projects revenue from free users and other products will reach
| $25 billion, or one-fifth of all revenue.
| TZubiri wrote:
| Nono, not OAI, they would never do that, it's OpenAI
| Personalization LLC, a sister of the subsidiary branch of
| OpenAI Inc.
| umeshunni wrote:
| https://www.adweek.com/media/openai-chatgpt-ads-job-listing-...
| Insanity wrote:
| I'm a pro user.. but this just seems like a way to make sure
| users engage more with the platform - like how social media apps
| try to get you addicted and are always fighting for your
| attention.
|
| Definitely not interested in this.
| pton_xd wrote:
| Yesterday was a full one -- you powered through a lot and kept
| yourself moving at a fast pace.
|
| Might I recommend starting your day with a smooth and creamy
| Starbucks(tm) Iced Matcha Latte? I can place the order and have
| it delivered to your doorstep.
| dvrj101 wrote:
| so, GPT TikTok in a nutshell
| mvieira38 wrote:
| Why?
| throwacct wrote:
| They're really trying everything. They need the Google/Apple
| ecosystem to compete against them. Fb is adding LLMs to all its
| products, too. Personally, I stopped using ChatGPT months ago in
| favor of other services, depending on what I'm trying to
| accomplish.
|
| Luckily for them, they have a big chunk of the "pie", so they
| need to iterate and see if they can form a partnership with Dell,
| HP, Canonical, etc, and take the fight to all of their
| competitors (Google, Microsoft, etc.)
| brap wrote:
| >Fb is adding LLMs to all its products, too.
|
| FB's efforts so far have all been incredibly lame. AI shines in
| productivity and they don't have any productivity apps. Their
| market is social which is arguably the last place you'd want to
| push AI (this hasn't stopped them from trying).
|
| Google, Apple and Microsoft are the only ones in my opinion who
| can truly capitalize on AI in its current state, and G is
| leading by a huge margin. If OAI and the other model companies
| want to survive, long term they'd have to work with MSFT or
| Apple.
| groby_b wrote:
| I'm feeling obliged to rehash a quote from the early days of the
| Internet, when MIDI support was added: "If I wanted your web site
| to make sounds, I'd rub my finger on the screen"
|
| Behind that flippant response lies a core principle. A computer
| is a tool. It should act on the request of the human using it,
| not by itself.
|
| Scheduled prompts: Awesome. Daily nag screens to hook up more
| data sources: Not awesome.
|
| (Also, from a practical POV: So they plan on creating a
| recommender engine to sell ads and media, I guess. Weehee. More
| garbage)
| sequoia wrote:
| Here's a free product enhancement for OpenAI if they're not
| already doing this:
|
| A todo app that reminds you of stuff. say "here's the stuff I
| need to do, dishes, clean cat litter fold laundry and put it
| away, move stuff to dryer then fold that when it's done etc."
| then it asks about how long these things take or gives you
| estimates. Then (here's the feature) it _checks in with you_ at
| intervals: "hey it's been 30 minutes, how's it going with the
| dishes?"
|
| This is basically "executive function coach." Or you could call
| it NagBot. Either way this would be extremely useful, and it's
| mostly just timers & push notifications.
| DenisM wrote:
| This will drive the opposite of user engagement.
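For what it's worth, the core of the "NagBot" idea above really is just timers plus a notification hook. A minimal sketch, with the notify callback left abstract (a real app would wire it to push notifications; all names here are illustrative):

```javascript
// Schedule one check-in per task, firing after the user's own
// time estimate elapses. Returns the timer handles so callers
// can cancel check-ins for tasks marked done.
function scheduleCheckIns(tasks, notify) {
  return tasks.map(task =>
    setTimeout(
      () => notify(`It's been ${task.minutes} min - how's "${task.name}" going?`),
      task.minutes * 60 * 1000
    )
  );
}

const tasks = [
  { name: "dishes", minutes: 30 },
  { name: "move laundry to dryer", minutes: 45 },
];

// In a real app notify() would be a push notification; here it
// just logs. Cancel the pending timers so this demo can exit.
const timers = scheduleCheckIns(tasks, msg => console.log(msg));
timers.forEach(t => clearTimeout(t));
```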
| dalmo3 wrote:
| The one modern thing that didn't have a feed, and (in the best
| case) just did what you asked.
|
| Next week: ChatGPT Reels.
| jimmydoe wrote:
| It seems not useful for 95% of users today, but later it can be
| baked into the hardware Jony Ive designed. So, good luck, I guess?
| sailfast wrote:
| Absolutely not. No. Hard pass.
|
| Why would I want yet another thing to tell me what I should be
| paying attention to?
| lqstuart wrote:
| "AI" is a $100B business, which idiot tech leaders who convinced
| themselves they were visionaries when interest rates were
| historically low have convinced themselves will save them from
| their stagnating growth.
|
| It's really cool. The coding tools are neat, they can somewhat
| reliably write pain in the ass boilerplate and only slightly fuck
| it up. I don't think they have a place beyond that in a
| professional setting (nor do I think junior engineers should be
| allowed to use them--my productivity has been destroyed by having
| to review their 2000 line opuses of trash code) but it's so cool
| to be able to spin up a hobby project in some language I don't
| know like Swift or React and get to a point where I can learn the
| ins and outs of the ecosystem. ChatGPT can explain stuff to me
| that I can't find experts to talk to about.
|
| That's the sum total of the product though, it's already complete
| and it does not need trillions of dollars of datacenter
| investment. But since NVIDIA is effectively taking all the fake
| hype money and taking it out of one pocket and putting it in
| another, maybe the whole Ponzi scheme will stay afloat for a
| while.
| smurfsmurf wrote:
| I've been saying this since I started using "AI" earlier this
| year: If you're a programmer, it's a glorified manual, and at
| that, it's wonderful. But beyond asking for cheat sheets on
| specific function signatures, it's pretty much useless.
| MisterBiggs wrote:
| Great way to sell some of those empty GPU cycles to consumers
| lexarflash8g wrote:
| I'm thinking OpenAI's strategy is to get users hooked on these
| new features to push ads on them.
|
| Hey, for that recipe you want to try, have you considered getting
| new knives or cooking ware? Found some good deals.
|
| For your travel trip, found a promo on a good hotel located here
| -- perfect walking distance for hiking and good restaurants that
| serve Thai food.
|
| Your running progress is great and you're hitting your stride!
| Consider using this app to track calories and record your
| workouts -- special promo for a 14-day trial.
| thoughtpalette wrote:
| Was thinking exactly the same. This correlates with needing
| another revenue stream and monetization strategy for OpenAI.
|
| In the end, it's almost always ads.
| bentt wrote:
| Even if they don't serve ads, think of the data they can share
| in aggregate. Think Facebook knows people? That's nothing.
| bgwalter wrote:
| Since every "AI" company frantically releases new applications,
| may I suggest OpenAI+ to copy the resounding success of Google+?
|
| Google+ is incidentally a great example of a gigantic money sink
| driven by optimistic hype.
| zelias wrote:
| Yet another category of startups killed by an incumbent
| duxup wrote:
| Hard to imagine this is anything useful beyond "give us all your
| data" in exchange for some awkward unprompted advice?
| IshKebab wrote:
| This could be _amazing_ for dealing with schools - I get
| information from my kids' school through like 5 different
| channels: Tapestry, email, a newsletter, parents' WhatsApp
| groups (x2), Arbor, etc. etc.
|
| And 90% of the information is not stuff I care about. The
| newsletter will be mostly "we've been learning about
| lighthouses this week" but they'll slip in "make sure your
| child is wearing wellies on Friday!" right at the end
| somewhere.
|
| If I could feed all that into AI and have it tell me about only
| the things that I actually need to know, that would be
| _fantastic_. I'd pay for that.
|
| Can't happen though because all those platforms are proprietary
| and don't have APIs or MCP to access them.
| duxup wrote:
| I feel you there, although it would also be complicated by
| those teachers who are just bad at technology and don't use
| those things well too.
|
| God bless them for teaching, but dang it someone get them to
| send emails and not emails with PDFs with the actual message
| and so on.
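The triage half of this wish doesn't strictly need an LLM either; once the text is exportable, even a keyword filter surfaces most actionable items. A sketch under that assumption (the keyword list and messages are invented; an LLM call could replace matchesAction for fuzzier cases):

```javascript
// Words that typically signal a parent needs to act, rather than
// just read. Purely illustrative - tune for your school's phrasing.
const ACTION_WORDS = [
  "make sure", "bring", "wear", "due", "deadline",
  "sign", "return", "payment", "by friday",
];

// True if a message contains any action word, case-insensitively.
function matchesAction(message) {
  const m = message.toLowerCase();
  return ACTION_WORDS.some(w => m.includes(w));
}

// Stand-in for messages pulled from the various channels.
const inbox = [
  "We've been learning about lighthouses this week!",
  "Make sure your child is wearing wellies on Friday!",
  "Photos from sports day are now on the portal.",
  "Trip consent forms are due back by Friday.",
];

console.log(inbox.filter(matchesAction));
// -> only the two actionable messages
```

The hard part, as the comment says, is getting the text out of proprietary apps in the first place; the filtering itself is trivial.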
| psyclobe wrote:
| ChatGPT has given me wings to tackle projects I would've never
| had the impetus to tackle, finally I know how to use my
| oscilloscope and I am repairing vintage amps; fun times.
| boldlybold wrote:
| I agree - the ability to lower activation energy in a field
| you're interested in, but not yet an expert, feels like having
| superpowers.
| crorella wrote:
| Same. I had a great idea (and a decently detailed plan) to
| improve an open source project, but never had the time and
| willpower to dive into the code. With Codex it was one night to
| set it up, and then slowly implementing every step of what I
| had originally planned.
| spike021 wrote:
| same for me but Claude. I've had an iphone game i've wanted to
| do for years but just couldn't spend the time consistently to
| learn everything to do it. but with Claude over the past three
| months i've been able to implement the game and even release it
| for fun.
| mihaaly wrote:
| May we look at it please? pure curiosity - also have similar
| thoughts you had. : )
| zelias wrote:
| Man, my startup does this but exclusively for enterprises, where
| it actually makes sense
| vbezhenar wrote:
| In the past, rich people had horses, while ordinary people
| walked. Today many ordinary people can afford a car. Can afford
| tasty food every day. Can afford a sizeable living place. Can
| afford to wash twice a day with hot water. That's an incredible
| life by medieval standards. Even kings didn't have everything we
| take for granted now.
|
| However some things are not available to us.
|
| One of those things is a personal assistant. Today, rich people
| can offload their daily burdens to personal assistants. That's a
| luxury service. I think AI will bring us a future where everyone
| will have access to a personal assistant, significantly reducing
| time spent on trivial, not-fun tasks. I think this is great, and
| I'm eager to live in that future. The direction of ChatGPT Pulse
| looks like that.
|
| Another thing we don't have cheap access to is human servants.
| Obviously it'll not happen in the observable future, but humanoid
| robots might prove even better replacements.
| taf2 wrote:
| This has been surprisingly helpful for me. I've been using this
| for a little while and enjoyed the morning updates. It has
| actually for many days for me been a better hacker news, in that
| I was able to get insights into technical topics i've been
| focused on ranging from salesforce, npm, elasticsearch and
| ruby... it's even helped me remember to fix a few bugs.
| ric2z wrote:
| try clicking "Listen to article"
| mostMoralPoster wrote:
| Oh wow this is revolutionary!!
| adverbly wrote:
| There's the monitization angle!
|
| A new channel to push recommendations. Pay to have your content
| pushed straight to people as a personalized recommendation from a
| trusted source.
|
| Will be interesting if this works out...
| StarterPro wrote:
| Wasn't this already implemented via google and apple separately?
| TZubiri wrote:
| Breaking the request response loop and entering into async
| territory?
|
| Great!
|
| The examples used?
|
| Stupid. Why would I want AI generated buzzfeed tips style
| articles. I guess they want to turn chatgpt into yet another
| infinite scroller
| bentt wrote:
| I am pleading with you all. Don't give away your entire identity
| to this or any other company.
| reactordev wrote:
| Cracks are emerging. Having to remind users of your relevance
| with daily meditations is the first sign that you desperately
| need your engagement numbers up.
| nycdatasci wrote:
| " This is the first step toward a more useful ChatGPT that
| proactively brings you..."
|
| Ads.
___________________________________________________________________
(page generated 2025-09-25 23:00 UTC)