[HN Gopher] AI's Dial-Up Era
___________________________________________________________________
AI's Dial-Up Era
Author : nowflux
Score : 442 points
Date : 2025-11-03 21:01 UTC (1 day ago)
(HTM) web link (www.wreflection.com)
(TXT) w3m dump (www.wreflection.com)
| arcticbull wrote:
| People tend to equate this to the railroad boom when saying that
| infrastructure spending will yield durable returns into the
| future no matter what.
|
| When the railroad bubble popped we had railroads. Metal and
| sticks, and probably more importantly, rights-of-way.
|
| If this is a bubble, and it pops, basically all the money will
| have been spent on Nvidia GPUs that depreciate to 0 over 4 years.
| All this GPU spending will need to be done again, every 4 years.
|
| Hopefully we at least get some nuclear power plants out of this.
| ares623 wrote:
| The recycling industry will boom. From what demand, you ask?
| We'll find out soon enough.
| troupo wrote:
| The boom might not last long enough for western countries to
| pull their heads out of their collective asses and ramp up
| production of nuclear plants.
|
| It takes China 5 years now, but they've been ramping up for
| more than 20 years.
| simonw wrote:
| Yeah, the short-lived GPU depreciation cycle does feel _very_
| relevant here.
|
| I'm still a fan of the railroad comparisons though for a few
| additional reasons:
|
| 1. The environmental impact of the railroad buildout was almost
| incomprehensibly large (though back in the 1800s people weren't
| really thinking about that at all.)
|
| 2. A lot of people lost their shirts investing in railroads!
| There were several bubbly crashes. A huge amount of money was
| thrown away.
|
| 3. There was plenty of wasted effort too. It was common for
| competing railroads to build out rails that served the same
| route within miles of each other. One of them might go bust and
| that infrastructure would be wasted.
| robinhoode wrote:
| Railroads need repair too? Not sure if it's every 4 years.
| Also, the trains I take to/from work are super slow because
| there is no money to upgrade.
|
| I think we may not upgrade every 4 years, but instead upgrade
| when the AI models are not meeting our needs AND we have the
| funding & political will to do the upgrade.
|
| Perhaps the singularity is just a sigmoid with the top of the
| curve being the level of capex the economy can withstand.
| arcticbull wrote:
| For what it's worth, they cost a lot less than highways to
| maintain. Something like the 101 in the Bay Area costs about
| $40,000 per lane-mile per year, or about $240,000 per mile per
| year in total.
|
| Trains are closer to $50,000-100,000 per mile per year.
|
| If there's no money for the work, it's a prioritization
| decision.
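|
| (A rough back-of-envelope on those figures, assuming the
| $240,000 is the per-mile total for a six-lane stretch; the
| lane count and the rail midpoint are my assumptions, not
| numbers from above:)
|
|     # Back-of-envelope check on the maintenance figures above.
|     highway_per_lane_mile = 40_000        # $/lane-mile/year
|     lanes = 6                             # assumed lane count
|     highway_per_mile = highway_per_lane_mile * lanes  # ~$240,000
|
|     rail_per_mile = (50_000 + 100_000) / 2  # midpoint of $50-100k
|
|     print(f"Highway: ~${highway_per_mile:,.0f} per mile per year")
|     print(f"Rail:    ~${rail_per_mile:,.0f} per mile per year")
|     print(f"Ratio:   ~{highway_per_mile / rail_per_mile:.1f}x")
|     # -> roughly a 3x gap in rail's favour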
| rhubarbtree wrote:
| What percentage of data centre build costs are the GPUs vs
| power stations, water cooling plants, buildings, roads,
| network, racks, batteries, power systems, etc.?
| schwarzrules wrote:
| >> basically all the money will have been spent on Nvidia GPUs
| that depreciate to 0 over 4 years
|
| I agree the depreciation schedule always seems like a real risk
| to the financial assumptions these companies/investors make,
| but a question I've wondered about: will there be an unexpected
| opportunity when all these "useless" GPUs are put out to
| pasture? It seems a bit like saying a factory will be useless
| because nobody wants to buy an IBM mainframe, when an
| innovative company can repurpose a non-zero part of that
| infrastructure for another use case.
| paxys wrote:
| There's a lot more to infrastructure spending than GPUs.
| Companies are building data centers, cooling systems, power
| plants (including nuclear), laying cables under oceans,
| launching satellites. Bubble or not, all of this will continue
| to be useful for decades in the future.
|
| Heck, if nothing else, all the new capacity being created today
| may translate to ~zero cost storage, CPU/GPU compute and
| networking available to startups in the future if the bubble
| bursts, and that itself may lead to a new software revolution.
| Just think of how many good ideas are held back today because
| deploying them at scale is too expensive.
| bryanlarsen wrote:
| > including nuclear
|
| Note that these are just power purchase agreements. It's not
| nothing, but it's a long way from actually building nuclear.
| amluto wrote:
| A bunch of the money is being spent on data centers and their
| associated cooling and power systems and on the power plants
| and infrastructure. Those should have much longer depreciation
| schedules.
| FridgeSeal wrote:
| Imagine the progress we could have made on climate change if
| this money had been funneled into that, instead of making some
| GPU manufacturers obscenely wealthy.
| bbddg wrote:
| Yeah it's infuriating to think about.
| leptons wrote:
| Throwing away the future for "AI" slop.
| fjdjcjejdjfje wrote:
| This is precisely why the AI bubble is so much worse than
| previous bubbles: the main capital asset that the bubble is
| acquiring is going to depreciate before the bubble's
| participants can ever turn a profit. Regardless of what AI's
| future capabilities are going to be, it's physically impossible
| for any of these companies to become profitable before the GPUs
| that they have _already purchased_ are either obsolete or burnt
| out from running under heavy load.
| vjvjvjvjghv wrote:
| I think the hardware infrastructure may become obsolete, but at
| the moment we are still just beginning to figure out how to use
| AI. So the knowledge will be the important thing that's left
| after the bubble. The current infrastructure will probably be
| as obsolete as dial-up infrastructure.
| runarberg wrote:
| The vast majority of the dot-com comparisons that I personally
| see are economic, not technological. People (or at least the
| ones I see) are claiming that the bubble mechanics of e.g.
| circular trading and over-investment are similar to the dot-com
| bubble, not that the AI technology is somehow similar to the
| internet (it obviously isn't). And to that extent we are in the
| year 1999, not 1995.
|
| Where this article claims there are two sides of the debate, I
| believe only one of them is real (the one hyping up the
| technology). While there are people like me who are pessimistic
| about the technology, we are not in any position of power, and
| our opinion on the matter is basically side noise. I think a
| much more common view (among people with any say in the future
| of this technology) is the belief that this technology is not
| yet at a point which warrants all this investment. There were
| people who said that about the internet in 1999, and they were
| proven 100% correct in the months that followed.
| vjvjvjvjghv wrote:
| Agreed. It would probably be better to keep improving AI before
| investing that much into infrastructure.
| mjr00 wrote:
| While I mostly agree with the article's premise (that AI will
| cause more software development to happen, not less) I disagree
| with two parts:
|
| 1. the opening premise comparing AI to dial-up internet;
| basically everyone knew the internet would be revolutionary long
| before 1995. Being able to talk to people halfway across the
| world on a BBS? Sending a message to your family on the other
| side of the country and them receiving it instantly? Yeah, it was
| pretty obvious this was transformative. The Krugman quote is an
| extreme, notable outlier, and it gets thrown out around literally
| _every_ new technology, from blockchain to VR headsets to 3DTVs,
| so just like, don't use it please.
|
| 2. the closing thesis of
|
| > Consider the restaurant owner from earlier who uses AI to
| create custom inventory software that is useful only for them.
| They won't call themselves a software engineer.
|
| The idea that restaurant owners will be writing inventory
| software might make sense if the only challenge of creating
| custom inventory software, or any custom software, was writing
| the code... but it isn't. Software projects don't fail because
| people didn't write enough code.
| solomonb wrote:
| Before I got my first full time software engineering gig (I had
| worked part time briefly years prior) I was working full time
| as a carpenter. We were paying for an expensive online work
| order system. Having some previous experience writing software
| for music in college and a couple /brief/ LAMP stack freelance
| jobs after college I decided to try to write my own work order
| system. It took me like a month and it would never have scaled,
| was really ugly, and had the absolute minimum number of
| features. I could never have accepted money from someone to use
| it, but it did what we needed and we ran with it for several
| years after that.
|
| I was only able to do this because I had some prior programming
| experience but I would imagine that if AI coding tools get a
| bit better they would enable a larger cohort of people to build
| a personal tool like I did.
| Kiro wrote:
| I don't think his quote is that extreme, and it was definitely
| not obvious to most people. A common thing you heard even
| around '95 was "I've tried the internet but it was nothing
| special".
| alecbz wrote:
| > basically everyone knew the internet would be revolutionary
| long before 1995. Being able to talk to people halfway across
| the world on a BBS? Sending a message to your family on the
| other side of the country and them receiving it instantly?
| Yeah, it was pretty obvious this was transformative.
|
| That sounds pretty similar to long-distance phone calls? (which
| I'm sure was transformative in its own way, but not on nearly
| the same scale as the internet)
|
| Do we actually know how transformative the general population
| of 1995 thought the internet would or wouldn't be?
| xwolfi wrote:
| In 1995 in France we had the Minitel already (like, really, a
| lot of people had one) and it was pretty incredible, but we
| were longing for something prettier, cheaper, snappier and
| more point-to-point (like the chat apps or email).
|
| As soon as the internet arrived, a bit late for us (I'd say
| 1999 maybe) due to the Minitel's "good enough" nature, it
| just became instantly obvious: everyone wanted it. The
| general population was raving mad to get an email address. I
| never heard anyone criticize the internet the way I criticize
| the fake "AI" stuff now.
| visarga wrote:
| >> Consider the restaurant owner from earlier who uses AI to
| create custom inventory software that is useful only for them.
| They won't call themselves a software engineer.
|
| I have a suspicion this is LLM text; it sounds corny. There are
| dozens of open source solutions, just look one up.
| bena wrote:
| "But the fact that some geniuses were laughed at does not imply
| that all who are laughed at are geniuses. They laughed at
| Columbus, they laughed at Fulton, they laughed at the Wright
| brothers. But they also laughed at Bozo the Clown."
|
| Because some notable people dismissed things that wound up
| having a profound effect on the world, it does not mean that
| everything dismissed will have a profound effect.
|
| We could just as easily be "peak Laserdisc" as "dial-up
| internet".
| rainsford wrote:
| I was happy to come into this thread and see I was not the
| first person for whom that quote came to mind. The dial-up
| Internet comparison implicitly argues for a particular outcome
| of current AI as a technology, but doesn't actually support
| that argument.
|
| There's another presumably unintended aspect of the comparison
| that seems worth considering. The Internet in 2025 is certainly
| vastly more successful and impactful than the Internet in the
| mid-90s. But dial-up itself as a technology for accessing the
| Internet was as much of a dead-end as Laserdisc was for
| watching movies at home.
|
| Whether or not AI has a similar trajectory as the Internet is
| separate from the question of whether the current
| implementation has an actual future. It seems reasonable to me
| that in the future we'll be enjoying the benefits of AI while
| laughing about the 2025 approach of just throwing more GPUs at
| the problem, in the same way we look back now and get a chuckle
| out of the idea of "shotgun modems" as the future.
| bigwheels wrote:
| _> Benchmark today's AI boom using five gauges:
|
| > 1. Economic strain (investment as a share of GDP)
|
| > 2. Industry strain (capex to revenue ratios)
|
| > 3. Revenue growth trajectories (doubling time)
|
| > 4. Valuation heat (price-to-earnings multiples)
|
| > 5. Funding quality (the resilience of capital sources)
|
| > His analysis shows that AI remains in a demand-led boom rather
| than a bubble, but if two of the five gauges head into red, we
| will be in bubble territory._
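|
| Purely as an illustration of that two-of-five rule (the gauge
| names come from the quote; the readings and thresholds below
| are hypothetical, not the article's):
|
|     # Illustrative sketch of the "two of five gauges in the red"
|     # rule. The readings are placeholders, not real data.
|     def bubble_verdict(readings: dict[str, bool]) -> str:
|         """readings maps each gauge to True if it is in the red."""
|         red = sum(readings.values())
|         if red >= 2:
|             return f"{red}/5 gauges red: bubble territory"
|         return f"{red}/5 gauges red: demand-led boom"
|
|     gauges = {
|         "economic strain (investment / GDP)": False,
|         "industry strain (capex / revenue)": True,
|         "revenue growth (doubling time)": False,
|         "valuation heat (P/E multiples)": False,
|         "funding quality (capital resilience)": False,
|     }
|     print(bubble_verdict(gauges))
|     # -> 1/5 gauges red: demand-led boom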
|
| This seems like a more quantitative approach than most of "the
| sky is falling", "bubble time!", "circular money!" etc analyses
| commonly found on HN and in the news. Are there other worthwhile
| macro-economic indicators to look at?
|
| It's fascinating how challenging it is to meaningfully compare
| recent events to prior economic cycles such as the Y2K-era tech
| bubble. It seems like it should be easy, but AFAICT it barely
| even rhymes.
| rhubarbtree wrote:
| Yep.
|
| Stock market capitalisation as a percentage of GDP, AKA the
| Buffett indicator.
|
| https://www.longtermtrends.net/market-cap-to-gdp-the-buffett...
|
| Good luck, folks.
| cb321 wrote:
| Besides your chart, another point along these lines is that
| the article cites Azhar claiming multiples are not in bubble
| territory, while also mentioning Murati getting an essentially
| infinite price multiple. Hmmmm...
| rybosworld wrote:
| How valuable is this metric considering that the biggest
| companies now draw a significant % of revenue from outside
| the U.S.?
|
| I'm sure there are other factors that make this metric not
| great for comparisons with other time periods, e.g.:
|
| - rates
|
| - accounting differences
| rhubarbtree wrote:
| I estimate you're talking 25% from overseas.
|
| If that bothers you, just multiply valuations by 0.75.
|
| It doesn't make much difference, even without doing the same
| adjustment for previous eras.
|
| Buffett indicator survives this argument. He's a smart guy.
| dude250711 wrote:
| _> Consider the restaurant owner from earlier who uses AI to
| create custom inventory software that is useful only for them._
|
| That is the real dial-up thinking.
|
| Couldn't AI like _be_ their custom inventory software?
|
| Codex and Claude Code should not even exist.
| morkalork wrote:
| "That side of prime rib is totally in the walk-in, just keep
| looking. Trust me, bro"
| ToucanLoucan wrote:
| > Couldn't AI like be their custom inventory software?
|
| Absolutely not. It's inherently software with a non-zero
| amount of probability in every operation. You'd have a similar
| experience asking an intern to remember your inventory.
|
| Like, I enjoy Copilot as a research tool, but at the same
| time, ANYTHING that involves delving into our chat history is
| often wrong. I own three vehicles, for example, and it cannot
| for the life of it remember their year, make and model.
| They're there, but they're constantly getting switched
| around in the buffer. And once I started posing questions
| about friends' vehicles, that only got worse.
| dude250711 wrote:
| But you should be able to say "remember this well" and the AI
| would know it needs a reliable database instead of relying on
| its LLM cache or whatever. Could it not just spin up Postgres
| in some Codex Cloud like a human developer would? Not today,
| but in a few years?
| handfuloflight wrote:
| It can do that today. I am doing that today.
| ToucanLoucan wrote:
| Why do I need to tell an AI to remember things?! How does
| AI consistently feel less intelligent than regular old
| boring software?!
| CamperBob2 wrote:
| Because you're using it wrong.
|
| Really. Tool use is a big deal for humans, and it's just
| as big a deal for machines.
| roommin wrote:
| Wouldn't an intelligent computer know to use tools? The
| core of the point being discussed seems to be: why do you
| need to ask it to make you inventory software, when an
| intelligent system would know that being asked to build an
| inventory system means setting up a database and logging all
| the information, and would task agents with doing that?
| CamperBob2 wrote:
| It's the same question that you might have asked in 1920.
| "This radio hardly works at all. Can't they do something
| about all the static? I don't see the big deal. This is
| just a scam to sell batteries and tubes."
| ToucanLoucan wrote:
| A radio isn't intelligent and isn't marketed as such. If
| you're going to sell software you call intelligent, I
| don't think I'm out of pocket for saying "this feels even
| dumber than regular software I use."
| roommin wrote:
| I don't find this comparison fitting.
| dg0 wrote:
| Nice article, but somewhat overstates how bad 1995 was meant to
| be.
|
| A single image generally took nothing like a minute. Most people
| had moved to 28.8K modems that would deliver an acceptable large
| image in 10-20 seconds. Mind you, the full-screen resolution was
| typically 800x600 and color was an 8-bit palette... so much less
| data to move.
|
| Moreover, thanks to "progressive jpeg", you got to see the full
| picture in blocky form within a second or two.
|
| And of course, with pages less busy and tracking cookies
| still a thing of the future, you could get enough of a news
| site up to start reading in less time than it can take today.
|
| One final irk is that it's a little overdone to claim that "For
| the first time in history, you can exchange letters with
| someone across the world in seconds". Telex had been around for
| decades, and faxes, taking 10-20 seconds per page, were already
| commonplace.
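|
| (For what it's worth, the 10-20 second figure checks out on a
| napkin; the JPEG sizes below are my guess at a typical "large"
| image of the era, not numbers from the comment:)
|
|     # Rough transfer-time check for a 28.8K modem.
|     line_rate_bps = 28_800
|     # ~10 bits per byte once framing/protocol overhead is counted
|     effective_bytes_per_s = line_rate_bps / 10   # ~2.9 kB/s
|
|     for jpeg_kb in (30, 60):                     # assumed sizes
|         seconds = jpeg_kb * 1024 / effective_bytes_per_s
|         print(f"{jpeg_kb} KB JPEG: ~{seconds:.0f} s")
|     # -> ~11 s and ~21 s, consistent with "10-20 seconds"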
| saltysalt wrote:
| Not sure the dial-up analogy fits; instead I tend to think we
| are in the mainframe period of AI, with large centralised
| computing models that are so big and expensive to host that
| only a few corporations can afford to do so. We rent a
| computing timeshare from them (tokens = punch cards).
|
| I look forward to the "personal computing" period, with small
| models distributed everywhere...
| runarberg wrote:
| I actually think we are much closer to the sneaker era of
| shoes, or the monorail era of public transit.
| onlyrealcuzzo wrote:
| Don't we already have small models highly distributed?
| saltysalt wrote:
| We do, but the vast majority of users interact with
| centralised models from OpenAI, Google Gemini, Grok...
| onlyrealcuzzo wrote:
| I'm not sure we can look forward to self-hosted models ever
| being mainstream.
|
| Like 50% of internet users are already interacting with one
| of these daily.
|
| You usually only change your habit when something is
| substantially better.
|
| I don't know how free versions are going to be smaller, run
| on commodity hardware, take up trivial space and RAM, etc.,
| AND be substantially better.
| saltysalt wrote:
| You make a fair point. I'm just hoping this will happen,
| but I'm not confident either, to be frank.
| ryanianian wrote:
| The "enshittification" hasn't happened yet. They'll add
| ads and other gross stuff to the free or cheap tiers.
| Some will continue to use it, but there will be an
| opportunity for self-hosted models to emerge.
| oceanplexian wrote:
| > I'm not sure we can look forward to self-hosted models
| ever being mainstream.
|
| If you are using an Apple product chances are you are
| already using self-hosted models for things like writing
| tools and don't even know it.
| o11c wrote:
| > Like 50% of internet users are already interacting with
| one of these daily. You usually only change your habit
| when something is substantially better.
|
| No, you usually only change your habit when the tools you
| are already using are changed without consulting you, and
| the statistics are then used to lie.
| raincole wrote:
| Because small models are just not that good.
| positron26 wrote:
| The vast majority won't switch until there's a 10x use
| case. We know they are coming. Why bother hopping?
| paxys wrote:
| Why would companies sell you the golden goose when they can
| instead sell you an egg every day?
| saltysalt wrote:
| Exactly! It's a rent-seeking model.
| echelon wrote:
| > I look forward to the "personal computing" period, with
| small models distributed everywhere...
|
| Like the web, which worked out great?
|
| Our Internet is largely centralized platforms. Built on
| technology controlled by trillion dollar titans.
|
| Google somehow got the lion's share of browser usage and is
| now dictating the direction of web tech, including the
| removal of adblock. The URL bar defaults to Google search,
| where the top results are paid ads.
|
| Your typical everyday person uses their default, locked
| down iPhone or Android to consume Google or Apple platform
| products. They then communicate with their friends over
| Meta platforms, Reddit, or Discord.
|
| The decentralized web could never outrun money. It's
| difficult to out-engineer hundreds of thousands of the most
| talented, most highly paid engineers that are working to
| create these silos.
| saltysalt wrote:
| I agree man, it's depressing.
| NemoNobody wrote:
| Ok, so Brave Browser exists - if you download it, you will
| see zero ads on the internet. I've never really seen ads on
| the internet - even in the before-Brave times.
|
| Fr tho, no ads - I'm not making money off them, I've got
| no invite code for you, I'm a human - I just don't get
| it. I've probably told 500 people about Brave, and I don't
| know any that ever tried it.
|
| I don't ever know what to say. You're not wrong, as long
| as you never try to do something else.
| echelon wrote:
| If everyone used Brave, Google wouldn't be a multi-
| trillion dollar company pulling revenues that dwarf many
| countries.
|
| Or rather, they'd block Brave.
| acheron wrote:
| Brave is just a rebranded Chrome. By using it you're
| still endorsing Google's control of the web.
| makingstuffs wrote:
| I was gonna say this. If Google decides to stop
| developing Chromium, then Brave is left with very few
| choices.
|
| As someone who has been using Brave since it was first
| announced and very tightly coupled to the BAT crypto
| token, I must say it is much less effective nowadays.
|
| I often still see a load of ads and also regularly have
| to turn off the shields for some sites.
| mulmen wrote:
| Your margin is my opportunity. The more expensive centralized
| models get the easier it is for distributed models to
| compete.
| codegeek wrote:
| You could say the same thing about computers when they were
| mostly mainframes. I am sure someone will figure out how to
| commoditize this just like personal computers and the
| internet.
| fph wrote:
| An interesting remark: in the 1950s-1970s, mainframes were
| typically rented rather than sold.
| vjvjvjvjghv wrote:
| It looks to me like the personal computer era is over.
| Everything is in the cloud and accessed through terminals
| like phones and tablets.
| freedomben wrote:
| And notably, those phones and tablets are intentionally
| hobbled by the platform owners (Apple, Google), who do
| everything they can to ensure they can't be treated like
| personal computing devices. Short of regulatory
| intervention, I don't see this trend changing anytime
| soon. We're going full on in the direction of more locked-
| down devices now that Google is tightening the screws on
| Android.
| worldsayshi wrote:
| Well if there's at least one competitor selling golden geese
| to consumers the rest have to adapt.
|
| Assuming consumers even bother to set up a coop in their
| living room...
| kakapo5672 wrote:
| Because companies are not some monolith, all doing identical
| things forever. If someone sees a new angle to make money,
| they'll start doing it.
|
| Data General and Unisys did not create PCs - small disrupters
| did that. These startups were happy to sell eggs.
| otterley wrote:
| They didn't create them, but PC startups like Apple and
| Commodore only made inroads into the home -- a relatively
| narrow market compared to business. It took IBM to
| legitimize PCs as business tools.
| DevKoala wrote:
| Because someone else will sell it to you if they don't.
| JumpCrisscross wrote:
| > _Why would companies sell you the golden goose when they
| can instead sell you an egg every day?_
|
| Because someone else can sell the goose and take your market.
|
| Apple is best aligned to be the disruptor. But I wouldn't
| underestimate the Chinese government dumping top-tier open-
| source models on the internet to take our tech companies down
| a notch or ten.
| gizajob wrote:
| Putting a few boots in Taiwan would also make for a
| profitable short. Profitable to the tune of several
| trillion dollars. Xi must be getting tempted.
| CuriouslyC wrote:
| It's a lot more complicated than that. They need to be
| able to take the island very quickly with a decapitation
| strike, while also keeping TSMC from being sabotaged or
| destroyed, then they need to be able to weather a long
| western economic embargo until they can "break the siege"
| with demand for what they control along with minor good
| faith concessions.
|
| It's a very risky play, and if it doesn't work it leaves
| China in a much worse place than before, so ideally you
| don't make the play unless you're already facing some big
| downside, sort of as a "hail Mary" move. At this point
| I'm sure they're assuming Trump is glad-handing them
| while preparing for military action; they might even view
| an invasion of Taiwan as defensive if they think military
| action could be imminent anyhow.
| JumpCrisscross wrote:
| > _then they need to be able to weather a long western
| economic embargo until they can "break the siege" with
| demand for what they control along with minor good faith
| concessions_
|
| And you know we'd be potting their transport ships, _et
| cetera_, from a distance the whole time, all to terrific
| fanfare. The Taiwan Strait would become the new training
| ground for naval drones, with the targets being almost
| exclusively Chinese.
| CuriouslyC wrote:
| I worked with the Taiwanese military; that's their dream
| scenario, but the reality is they're scared shitless that
| the Chinese will decapitate them with massive air
| superiority. Drones don't mean shit without C2.
| JumpCrisscross wrote:
| > _they're scared shitless that the Chinese will
| decapitate them with massive air superiority_
|
| Taiwan fields strong air defenses backed up by American
| long-range fortifications.
|
| The threat is covert decapitation. A series of terrorist
| attacks carried out to sow confusion while the attack
| launches.
|
| Nevertheless, unless China pulls off a Kabul, they'd
| still be subject to constant cross-Strait harassment.
| CuriouslyC wrote:
| China has between 5:1 and 10:1 advantage depending on
| asset class. If not already on standby, US interdiction
| is ~48 hours. For sure China is going to blast on all
| fronts, so cyber and grid interruptions combined with
| shock and awe are definitely gonna be a thing. It's not a
| great setup for Taiwan.
| gizajob wrote:
| Destroying TSMC or knowing it would be sabotaged would
| pretty much be the point of the operation. Would take 48
| hours and they could be out of there again and say "ooops
| sorry" at the UN.
| CuriouslyC wrote:
| Hard disagree. They need chips bad, and it's the US
| defense position that TSMC be destroyed if possible in
| the event of a successful Chinese invasion. They also care
| about reunification on principle, and an attack like that
| without letting them force "One China" on the Taiwanese
| in the aftermath would just move them farther from that
| goal.
| paxys wrote:
| By that logic none of us should be paying monthly
| subscriptions for anything because obviously someone would
| disrupt that pricing model and take business away from all
| the tech companies who are charging it? Especially since
| personal computers and mobile devices get more and more
| powerful and capable with every passing year. Yet
| subscriptions also get more prevalent every year.
|
| If Apple does finally come up with a fully on-device AI
| model that is actually useful, what makes you think they
| won't gate it behind a $20/mo subscription like they do for
| everything else?
| JumpCrisscross wrote:
| > _By that logic none of us should be paying monthly
| subscriptions for anything because obviously someone
| would disrupt that pricing model and take business away
| from all the tech companies who are charging it?_
|
| _Non sequitur._
|
| If a market is being ripped off by subscription, there is
| opportunity in selling the asset. Vice versa: if the
| asset sellers are ripping off the market, there is
| opportunity to turn it into a subscription. Business
| models tend to oscillate between these two for a variety
| of reasons. Nothing there suggets one mode is infinitely
| yielding.
|
| > _If Apple does finally come up with a fully on-device
| AI model that is actually useful, what makes you think
| they won 't gate it behind a $20/mo subscription like
| they do for everything else?_
|
| If they can, someone else can, too. They can make plenty
| of money selling it straight.
| Draiken wrote:
| > If a market is being ripped off by subscription, there
| is opportunity in selling the asset.
|
| Only in theory. Nothing beats getting paid forever.
|
| > Business models tend to oscillate between these two for
| a variety of reasons
|
| They do? AFAICT everything devolves into
| subscriptions/rent since it maximizes profit. It's the
| only logical outcome.
|
| > If they can, someone else can, too.
|
| And that's why companies love those monopolies. So, no...
| others can't straight up compete against a monopoly.
| phinnaeus wrote:
| What on-device app does Apple charge a subscription for?
| cloverich wrote:
| Because they need to displace OpenAI users, or OpenAI
| will steer their trajectory towards Apple at some point.
| likium wrote:
| Unfortunately, most people just want eggs, not the burden
| of actually owning the goose.
| troupo wrote:
| > Apple is best aligned to be the disruptor.
|
| Is this disruptor Apple in the room with us now?
|
| Apple's second biggest money source is services. You know,
| subscriptions. And that source keeps growing:
| https://sixcolors.com/post/2025/10/charts-apple-caps-off-
| bes...
|
| It's also that same Apple that fights tooth and nail every
| single attempt to let people have the goose or even the
| promise of a goose. E.g. by saying that it's entitled to a
| cut even if a transaction didn't happen through Apple.
| eloisant wrote:
| Sure, the company that launched iTunes and killed physical
| media, then released a phone where you can't install apps
| ("the web is the apps") will be the disruptor to bring back
| local computing to users...
| positron26 wrote:
| When the consumer decides to discover my site and fund
| federated and P2P infrastructure, they can have a seat at the
| table.
| anjel wrote:
| Selling fertile geese was a winning and proven business
| model for a very long time.
|
| Selling eggs is better how?
| cyanydeez wrote:
| I think we are in the dotcom boom era where investment is
| circular and the cash investments all depend on the idea that
| growth is infinite.
|
| Just a bunch of billionaires jockeying for not being poor.
| gowld wrote:
| We are also in the mainframe period of computing, with large
| centralised cloud services.
| sixtyj wrote:
| Dial-up + mainframe. Mainframe in the sense of silos; dial-up
| in the sense that the speed we have in 2025 will look like
| dial-up when we look back from 2035.
| chemotaxis wrote:
| > I look forward to the "personal computing" period, with small
| models distributed everywhere...
|
| One could argue that this period was just a brief fluke.
| Personal computers really took off only in the 1990s, web 2.0
| happened in the mid-2000s. Now, for the average person, 95%+ of
| screen time boils down to using the computer as a dumb terminal
| to access centralized services "in the cloud".
| btown wrote:
| Even the most popular games (with few exceptions) present as
| relatively dumb terminals that need constant connectivity to
| sync every activity to a mainframe - not necessarily because
| it's an MMO or multiplayer game, but because it's the
| industry standard way to ensure fairness. And by fairness, of
| course, I mean the optimization of enforcing "grindiness" as
| a mechanism to sell lootboxes and premium subscriptions.
|
| And AI just further _normalizes_ the need for connectivity;
| cloud models are likely to improve faster than local models,
| for both technical and business reasons. They've got the
| premium-subscriptions model down. I shudder to think what
| happens when OpenAI begins hiring/subsuming-the-knowledge-of
| "revenue optimization analysts" from the AAA gaming world as
| a way to boost revenue.
|
| But hey, at least you still need humans, at some level, if
| your paperclip optimizer is told to find ways to get humans
| to spend money on "a sense of pride and accomplishment." [0]
|
| We do not live in a utopia.
|
| [0] https://www.guinnessworldrecords.com/world-
| records/503152-mo... - https://www.reddit.com/r/StarWarsBattl
| efront/comments/7cff0b...
| throw23920 wrote:
| I imagine there are plenty of indie single-player games
| that work just fine offline. You lose cloud saves and
| achievements, but everything else still works.
| pksebben wrote:
| That 'average' is doing a lot of work to obfuscate the
| landscape. Open source continues to grow (indicating a robust
| ecosystem of individuals who use their computers for local
| work) and more importantly, the 'average' looks like it does
| not necessarily due to a reduction in local use, but to an
| explosion of users that did not previously exist (mobile
| first, SAAS customers, etc.)
|
| The thing we do need to be careful about is regulatory
| capture. We could very well end up with nothing but
| monolithic centralized systems simply because it's made
| illegal to distribute, use, and share open models. They
| hinted quite strongly that they wanted to do this with
| deepseek.
|
| There may even be a case to be made that at some point in the
| future, small local models will outperform monoliths - if
| distributed training becomes cheap enough, or if we find an
| alternative to backprop that allows models to learn as they
| infer (like a more developed forward-forward or something
| like it), we may see models that do better simply _because_
| they aren't a large centralized organism behind a walled
| garden. I'll grant that this is a fairly Pollyanna take and
| represents the best possible outcome, but it's not
| outlandishly fantastic - and there is good reason to believe
| that any system based on a robust decentralized architecture
| would be more resilient to problems like platform
| enshittification and overdeveloped censorship.
|
| At the end of the day, it's not important what the 'average'
| user is doing, so long as there are enough non-average users
| pushing the ball forward on the important stuff.
| idiotsecant wrote:
| I can't imagine a universe where a small mind with limited
| computing resources has an advantage against a datacenter
| mind, no matter the architecture.
| bee_rider wrote:
| The small mind could have an advantage if it is closer or
| more trustworthy to users.
|
| It only has to be good enough to do what we want. In the
| extreme, maybe inference becomes cheap enough that we ask
| "why do I have to wake up the laptop's antenna?"
| galaxyLogic wrote:
| I would like to have a personal AI agent which basically
| has a copy of my knowledge, a reflection of me, so it
| could help me multiply my mind.
| heavyset_go wrote:
| I don't want to send sensitive information to a data
| center, I don't want it to leave my machine/network/what
| have you. Local models can help in that department.
|
| You could say the same about all self-hosted software,
| teams with billions of dollars to produce and host SaaS
| will always have an advantage over smaller, local
| operations.
| gizajob wrote:
| The only difference is latency.
| bigfatkitten wrote:
| Universes like ours where the datacentre mind is
| completely untrustworthy.
| hakfoo wrote:
| Abundant resources could enable bad designs. I could in
| particular see a lot of commercial drive for huge models
| that can solve a bazillion different use cases, but
| aren't efficient for any of them.
|
| There might also be local/global bias strategies. A tiny
| local model trained on your specific code/document base
| may be better aligned to match your specific needs than a
| galaxy scale model. If it only knows about one "User"
| class, the one in your codebase, it might be less prone
| to borrowing irrelevant ideas from fifty other systems.
| pksebben wrote:
| The advantage it might have won't be in the form of "more
| power", it would be in the form of "not burdened by
| sponsored content / training or censorship of any kind,
| and focused on the use-cases most relevant to the
| individual end user."
|
| We're already very, very close to "smart enough for most
| stuff". We just need that to also be "tuned for our
| specific wants and needs".
| TheOtherHobbes wrote:
| We already have monolithic centralised systems.
|
| Most open source development happens on GitHub.
|
| You'd think non-average developers would have noticed their
| code is now hosted by Microsoft, not the FSF. But perhaps
| not.
|
| The AI end game is likely some kind of post-Cambrian, post-
| capitalist soup of evolving distributed compute.
|
| But at the moment there's no conceivable way for local
| and/or distributed systems to have better performance and
| more intelligence.
|
| Local computing has latency, bandwidth, and speed/memory
| limits, and general distributed computing isn't even a
| thing.
| jayd16 wrote:
| I don't know, I think you're conflating content streaming
| with central compute.
|
| Also, is percentage of screentime the relevant metric? We
| moved TV consumption to the PC, does that take away from PCs?
|
| Many apps moved to the web, but that's basically just streamed
| code run in a local VM. Is that a dumb terminal? It's not
| exactly independent of local compute...
| kamaal wrote:
| Nah, your parent comment has a valid point.
|
| Nearly the entirety of computer use cases today don't
| involve running things on a 'personal computer' in any way.
|
| In fact, these days everyone kind of agrees that even
| hosting a spreadsheet on your own computer is a bad idea.
| Cloud, where everything is backed up, is the way to go.
| jayd16 wrote:
| But again, that's conflating web-connected, or even web-
| required, with mainframe compute, and it's just not the
| same.
|
| PC was never 'no web'. No one actually 'counted every
| screw in their garage' as the PC killer app. It was
| always the web.
| kamaal wrote:
| In time, mainframes of this age will make a comeback.
|
| This whole idea that you can connect lots of cheap, low-
| capacity boxes and drive down compute costs is already
| going away.
|
| In time people will go back to thinking of compute as a
| function of the time taken to finish processing. That's the
| paradigm in the cloud compute world - you are billed for
| the TIME you use the box. Eventually people will just
| want to use something bigger that gets things done
| faster, so they don't have to rent it for long.
| galaxyLogic wrote:
| It's also interesting that computing capacity is no
| longer discussed as instructions per second, but as
| gigawatts.
| eru wrote:
| You know that the personal computer predates the web by
| quite a few years?
| rambambram wrote:
| This. Although brief, there were at least a couple of
| years of using PCs without an internet connection. It's
| unthinkable now. And even back then, if you blinked,
| this time period was over.
| jayd16 wrote:
| Sure, I was too hyperbolic. I simply meant connecting to
| the web didn't make it not a PC.
|
| The web really pushed adoption, much more than the idea of a
| personal computation machine did. It was the main use case
| for most folks.
| morsch wrote:
| One of the actual killer apps was gaming. Which still
| "happens" mostly on the client, today, even for networked
| games.
| jhanschoo wrote:
| Yet the most popular games are online-only and even more
| have their installation base's copies of the game managed
| by an online-first DRM.
| morsch wrote:
| That's true, but beside the point: even online-only games,
| or those gated by online DRM, are not streamed and don't
| resemble a thin-client architecture.
|
| That exists, too, with GeForce Now etc, which is why I
| said mostly.
| jayd16 wrote:
| This is just factually inaccurate.
| bandrami wrote:
| Umm... I had a PC a decade before the web was invented,
| and I didn't even use the web for like another 5 years
| after it went public ("it's an interesting bit of tech
| but it will obviously never replace gopher...")
|
| The killer apps in the 80s were spreadsheets and desktop
| publishing.
| eru wrote:
| > I don't know, I think you're conflating content streaming
| with central compute.
|
| Would you classify eg gmail as 'content streaming'?
| mikepurvis wrote:
| But gmail is also a relatively complicated app, much of
| which runs locally on the client device.
| MobiusHorizons wrote:
| It is true that browsers do much more computation than
| "dumb" terminals, but there are still non-trivial
| parallels. Terminals do contain a processor and memory in
| order to handle settings menus, handle keyboard input and
| convert incoming sequences into a character array that is
| then displayed on the screen. A terminal is mostly
| useless without something attached to the other side, but
| not _completely_ useless. You can browse the menus,
| enable local echo, and use the device as something like a
| scratchpad. I once drew up a schematic as ASCII art this
| way. The contents are ephemeral and you have to take a
| photo of the screen or something in order to retain the
| data.
|
| Web browsers aren't quite that useless with no internet
| connection; some sites do offer offline capabilities (for
| example Gmail), but even then, the vast majority of
| offline experiences exist to tide the user over until the
| network can be re-established, instead of truly offering
| something useful to do locally. Probably the only
| mainstream counter-examples would be games.
| WalterSear wrote:
| It's still SaaS, with components that couldn't be
| replicated client-side, such as AI.
| galaxyLogic wrote:
| Right. But does it matter whether computation happens on
| the client or the server? Probably on both in the end.
|
| But yes, I am looking forward to having my own LLM on my
| PC which only I have access to.
| fragmede wrote:
| Google's own Gemma models are runnable locally on a Pixel
| 9 Max, so some level of AI is replicable client-side. As
| far as Gmail running locally goes, it wouldn't be impossible
| for Gmail to be locally hosted and hit a local cache
| which syncs with a server only periodically over
| IMAP/JMAP/whatever, if Google actually wanted to do it.
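|
| A very rough sketch of that local-cache-plus-periodic-sync
| idea (host, credentials and schema are placeholders, and real
| Gmail access would need an app password or OAuth):
|
|     # Sketch: cache message headers locally, sync when online.
|     import email, imaplib, sqlite3
|
|     def sync_inbox(host, user, password, db_path="mail.db"):
|         db = sqlite3.connect(db_path)
|         db.execute("CREATE TABLE IF NOT EXISTS messages "
|                    "(uid TEXT PRIMARY KEY, sender TEXT, subject TEXT)")
|         imap = imaplib.IMAP4_SSL(host)
|         imap.login(user, password)
|         imap.select("INBOX", readonly=True)
|         _, data = imap.uid("SEARCH", "ALL")
|         for uid in data[0].split():
|             uid = uid.decode()
|             if db.execute("SELECT 1 FROM messages WHERE uid=?",
|                           (uid,)).fetchone():
|                 continue  # already cached locally
|             _, msg_data = imap.uid("FETCH", uid, "(BODY.PEEK[HEADER])")
|             msg = email.message_from_bytes(msg_data[0][1])
|             db.execute("INSERT INTO messages VALUES (?, ?, ?)",
|                        (uid, msg.get("From", ""), msg.get("Subject", "")))
|         db.commit()
|         imap.logout()
|         db.close()
|
|     # The mail UI reads from mail.db; sync_inbox() runs whenever
|     # a connection happens to be available.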
| jayd16 wrote:
| Well, app code is streamed, content is streamed. The app
| code is run locally. Content is pulled periodically.
|
| The mail server is the mail server even for Outlook.
|
| Outlook gives you a way to look through email offline.
| Gmail apps and even Gmail in Chrome have an offline mode
| that lets you look through email.
|
| It's not easy to call it fully offline, nor a dumb
| terminal.
| JumpCrisscross wrote:
| > _using the computer as a dumb terminal to access
| centralized services "in the cloud"_
|
| Our personal devices are _far_ from thin clients.
| freedomben wrote:
| Depends on the app, and the personal device. Mobile devices
| are increasingly thin clients. Of course hardware-wise they
| are fully capable personal computers, but ridiculous
| software-imposed limitations make that increasingly
| difficult.
| Cheer2171 wrote:
| But that is what they are mostly used for.
| TheOtherHobbes wrote:
| On phones, most of the compute is used to render media
| files and games, and make pretty animated UIs.
|
| The text content of a weather app is trivial compared to
| the UI.
|
| Same with many web pages.
|
| Desktop apps use local compute, but that's more a
| limitation of latency and network bandwidth than any
| fundamental need to keep things local.
|
| Security and privacy also matter to some people. But not
| to most.
| bigyabai wrote:
| Speak for yourself. Many people don't daily-drive anything
| more advanced than an iPad.
| eru wrote:
| iPads are incredibly advanced. Though I guess you mean
| they don't use anything that requires more sophistication
| from the user (or something like that)?
| boomlinde wrote:
| The iPad is not a thin client, is it?
| troupo wrote:
| It is, for the vast majority of users.
|
| Turn off internet on the iPad and see how many apps that
| people use still work.
| boomlinde wrote:
| I'm not questioning whether the iPad can be used as a
| client in some capacity, or whether people tend to use it
| as a client. I question whether the iPad is a _thin_
| client. The answer to that question doesn't lie in how
| many applications require an internet connection, but in
| how many applications require local computational
| resources.
|
| The iPad is a high-performance computer, not just because
| Apple thinks that's fun, but out of necessity given its
| ambition: the applications people use on it require local
| storage and rather heavy local computation. The web
| browser standards if nothing else have pretty much
| guaranteed that the age of thin clients is over: a client
| needs to supply a significant amount of computational
| resources and storage to use the web generally. Not even
| Chromebooks will practically be anything less than rich
| clients.
|
| Going back to the original topic (and source of the
| analogy), iOS hosts an on-device large language model.
| troupo wrote:
| As with everything, the lines are a bit blurred these
| days. We may need a new term for these devices. But
| despite all the compute and storage and on-device models
| these supercomputers are barely a step above thin
| clients.
| mlrtime wrote:
| No, it's a poor analogy. I'm old enough to have used a
| Wyse terminal. That's what I think of when I hear "dumb
| terminal". It was dumb.
|
| Maybe a PC without a hard drive (PXE the OS), but if it
| has storage and can install software, it's not dumb.
| troupo wrote:
| We may want a new term for our devices :)
| https://news.ycombinator.com/item?id=45808654
| immutology wrote:
| "Thin" can be interpreted as relative, no?
|
| I think it depends on whether you see the browser as a
| vehicle for content or as a runtime environment.
|
| Maybe it depends on the application architecture...? I.e.,
| a compute-heavy WASM SPA at one end vs a server-rendered
| website.
|
| Or is it an objective measure?
| bandrami wrote:
| I mean, Chromebooks really aren't very far at all from thin
| clients. But even my monster ROG laptop when it's not
| gaming is mostly displaying the results of computation that
| happened elsewhere
| positron26 wrote:
| Makes me want to unplug and go back to offline social media.
| That's a joke. The dominant effect was networked applications
| getting developed, enabling community, not a shift back to
| client terminals.
| grumbel wrote:
| Once upon a time social media was called Usenet and worked
| offline in a dedicated client with a standard protocol. You
| only went online to download and send messages, but could
| then go offline and read them in an app of your choice.
|
| Web2.0 discarded the protocol approach and turned your
| computer into a thin client that does little more than
| render webapps that require you to be permanently online.
| cesarb wrote:
| > Once up on a time social media was called Usenet and
| worked offline in a dedicated client with a standard
| protocol.
|
| There was also FidoNet with offline message readers.
| positron26 wrote:
| > called Usenet and worked offline
|
| People must have been pretty smart back then. They had to
| know to hang up the phone to check for new messages.
| seemaze wrote:
| I think that speaks more to the fact that software ate the
| world than to the locality of compute. It's a breadth-first,
| depth-last game.
| wolpoli wrote:
| The personal computing era happened partly because, while
| there was demand for computing, users' connectivity to the
| internet was poor or limited, so they couldn't just
| connect to a mainframe. We now have high speed internet
| access everywhere - I don't know what would drive the
| equivalent of the era of personal computing this time.
| almostnormal wrote:
| Centralized only became mainstream when everything started
| to be offered "for free". When the choice was to buy or to
| pay recurrently, more often the choice was to buy.
| troupo wrote:
| There are no longer options to buy. Everything is a
| subscription
| rightbyte wrote:
| Between mobile phone service including SMS and an ISP
| service which usually includes mail, I don't see the need
| for any hosted service.
|
| There are FOSS alternatives for about everything for
| hobbyist and consumer use.
| api wrote:
| There are no FOSS alternatives for consumer use unless
| the consumer is an IT pro or a developer. Regular people
| can't use most open source software without help. Some of
| it, like Linux desktop stuff, has a nice enough UI that
| they can use it casually but they can't install or
| configure or fix it.
|
| Making software that is polished and reliable and
| automatic enough that non computer people can use it is a
| lot harder than just making software. I'd say it's
| usually many times harder.
| rightbyte wrote:
| I don't think that is a software issue but a social issue
| nowadays. FOSS alternatives have become quite OK in my
| opinion.
|
| If computers came with Debian, Firefox and LibreOffice
| preinstalled instead of only W11, Edge and some Office
| 365 trial, the relative difficulty would be gone, I
| think.
|
| Same thing with most IT departments only dealing with
| Windows in professional settings. If you are even allowed
| to use something different, you are on your own.
| torginus wrote:
| I think people have seen enough of this 'free' business
| model to know the things being sold for free are, in fact,
| not.
| mlrtime wrote:
| Some people, but a majority see it as free. Go to your
| local town center and randomly poll people how much they
| pay for email or google search, 99% will say it is free
| and stop there.
| ruszki wrote:
| > We now have high speed internet access everywhere
|
| As I travel a ton, I can confidently tell you that this is
| still not true at all, and I'm kinda disappointed that the
| general rule of optimizing for bad reception died.
| ChadNauseam wrote:
| I work on a local-first app for fun and someone told me I
| was simply creating problems for myself and I could just
| be using a server. But I'm in the same boat as you. I
| regularly don't have good internet and I'm always
| surprised when people act like an internet connection is
| a safe assumption. Even every day I go up and down an
| elevator where I have no internet, I travel regularly, I
| go to concerts and music festivals, and so on.
| sampullman wrote:
| I don't even travel that much, and still have trouble.
| Tethering at the local library or coffee shops is hit or
| miss, everything slows down during storms, etc.
| BoxOfRain wrote:
| > everything slows down during storms
|
| One problem I've found in my current house is that the
| connection becomes flakier in heavy rain, presumably due
| to poor connections between the cabinet and houses. I
| live in Cardiff which for those unaware is one of
| Britain's rainiest cities. Fun times.
| mlrtime wrote:
| Not true because of cost or access? If you consider
| Starlink high speed, it truly is available everywhere.
| virgilp wrote:
| Because of many reasons. It's not practical to have a
| Starlink antenna with you everywhere. And then yes, cost
| is a significant factor too - even in the dialup era
| satellite internet connection was a thing that existed
| "everywhere", in theory....
| ruszki wrote:
| Access. You cannot use Starlink on a train, flight,
| inside buildings, etc. Starlink is also not available
| everywhere: https://starlink.com/map. Also, it's not
| feasible to bring that with me a lot of the time, for
| example on my backpacking trips; it's simply too large.
| BoxOfRain wrote:
| Yeah British trains are often absolutely awful for this,
| I started putting music on my phone locally to deal with
| the abysmal coverage.
| bartread wrote:
| > the general rule of optimizing for bad reception died.
|
| Yep, and people will look at you like you have two heads
| when you suggest that perhaps we should take this into
| account, because it adds both cost and complexity.
|
| But I am sick to the gills of using software - be that on
| my laptop or my phone - that craps out constantly when
| I'm on the train, or in one of the many mobile reception
| black spots in the areas where I live and work, or
| because my rural broadband has decided to temporarily
| give up, because the software wasn't built with
| unreliable connections in mind.
|
| It's not that bleeding difficult to build an app that
| stores state locally and can sync with a remote service
| when connectivity is restored, but companies don't want
| to make the effort because it's perceived to be a niche
| issue that only affects a small number of people a small
| proportion of the time and therefore not worth the extra
| effort and complexity.
|
| Whereas I'd argue that it affects a decent proportion of
| people on at least a semi-regular basis so is probably
| worth the investment.
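|
| For what it's worth, the local-state-plus-sync pattern really
| is small. A minimal sketch (the endpoint and payload shape are
| placeholders, not anything from a real product):
|
|     # Sketch: write locally first, flush to a server when online.
|     import json, sqlite3, urllib.error, urllib.request
|
|     db = sqlite3.connect("app_state.db")
|     db.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER "
|                "PRIMARY KEY, payload TEXT, synced INTEGER DEFAULT 0)")
|
|     def save_change(change):
|         """The app never blocks on the network for a write."""
|         db.execute("INSERT INTO outbox (payload) VALUES (?)",
|                    (json.dumps(change),))
|         db.commit()
|
|     def try_sync(endpoint="https://example.com/api/sync"):
|         """Best-effort flush; failures just wait for next time."""
|         rows = db.execute(
|             "SELECT id, payload FROM outbox WHERE synced = 0").fetchall()
|         for row_id, payload in rows:
|             req = urllib.request.Request(
|                 endpoint, data=payload.encode(),
|                 headers={"Content-Type": "application/json"})
|             try:
|                 urllib.request.urlopen(req, timeout=5)
|             except OSError:
|                 return  # offline or flaky; keep the rest queued
|             db.execute("UPDATE outbox SET synced = 1 WHERE id = ?",
|                        (row_id,))
|             db.commit()
|
|     save_change({"note": "written on the train"})  # instant, offline OK
|     try_sync()  # call on a timer or on a reconnect event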
| asa400 wrote:
| We ignore the fallacies of distributed computing at our
| peril: https://en.wikipedia.org/wiki/Fallacies_of_distrib
| uted_compu...
| LogicFailsMe wrote:
| Moving services to the cloud unfortunately relieves a lot
| of the complexity of software development with respect to
| the menagerie of possible hardware environments.
|
| It of course leads to a crappy user experience if they
| don't optimize for low bandwidth, but they don't seem to
| care about that. Have you ever checked out how useless
| your algorithmic Facebook feed is now? Tons of bandwidth,
| very little information.
|
| It seems like their measure is time on their website
| equals money in their pocket and baffling you with BS is
| a great way to achieve that until you never visit again
| in disgust and frustration.
| wtallis wrote:
| I don't think the "menagerie of possible hardware
| environments" excuse holds much water these days. Even
| web apps still need to accommodate various screen sizes
| and resolutions and touch vs mouse input.
|
| Native apps need to deal with the variety in _software_
| environments (not to say that web apps are entirely
| insulated from this), across several mobile and desktop
| operating systems. In the face of _that_ complexity,
| having to compile for both x86-64 and arm64 is at most a
| minor nuisance.
| LogicFailsMe wrote:
| Have you ever distributed an app on the PC to more than a
| million people? It might change your view. Browser issues
| are a different argument and I agree with you 100% there.
| I really wish people would pull back and hold everyone to
| consistent standards but they won't.
| visarga wrote:
| It's always a small crisis deciding what app/book to install
| on my phone to give me 5-8 hours of reading while on a plane.
| I found one - Newsify, combined with YT caching.
| donkeybeer wrote:
| Usually it reduces, not adds, complexity. Simpler pages
| without a hundred different JS frameworks are faster.
| Razengan wrote:
| > _I don 't know what would drive the equivalent of the era
| of personal computing this time._
|
| Space.
|
| You don't want to wait 3-22 minutes for a ping from Mars.
| AlecSchueler wrote:
| I'm not sure if the handful of people in space stations
| are a big enough market to drive such changes.
| threetonesun wrote:
| Privacy. I absolutely will not ever open my personal files
| to an LLM over the web, and even with my mid-tier M4
| Macbook I'm close to a point where I don't have to. I
| wonder how much the cat is out of the bag for private
| companies in this regard. I don't believe the AI companies
| founded on stealing IP have stopped.
| AlecSchueler wrote:
| Privacy is a niche concern sadly.
| jimbokun wrote:
| I believe Apple has made a significant number of iPhone
| sales due to a perception of better privacy than Android.
| netdevphoenix wrote:
| > We now have high speed internet access everywhere
|
| This is such an HN comment, illustrating how little your
| average HN user knows of the world beyond their tech bubble.
| Internet everywhere, you might have something of a point.
| But "high speed internet access everywhere" sounds like "I
| haven't travelled much in my life".
| unethical_ban wrote:
| Privacy, reliable access when not connected to the web, the
| principle of decentralization for some. Less supply chain
| risk for private enterprise.
| torginus wrote:
| I dislike the view of individuals as passive sufferers of the
| preferences of big corporations.
|
| You can, and people do, self-host stuff that big tech wants
| pushed into the cloud.
|
| You can have a NAS, a private media player, Home Assistant
| has been making waves in the home automation sphere. Turns
| out people don't like buying overpriced devices only to have
| to pay a $20 subscription, and find out their devices don't
| talk to each other, upload footage inside of their homes to
| the cloud, and then get bricked once the company selling them
| goes under and turns off the servers.
| __alexs wrote:
| You can dislike it, but that doesn't make it any less true -
| and it's getting truer.
| jhanschoo wrote:
| You can likewise host models if you so choose. Still the
| vast majority of people use online services both for
| personal computing and for LLMs.
| api wrote:
| Things are moving this way because it's convenient and easy
| and most people today are time poor.
| torginus wrote:
| I think it has more to do with the 'common wisdom'
| dictating that this is the way to do it, as 'we've always
| done it like this'.
|
| Which might even be true, since cloud based software
| might offer conveniences that local substitutes don't.
|
| However, this is not an inherent property of cloud
| software; it's just that some effort needs to go into a local
| alternative.
|
| That's why I mentioned Home Assistant - a couple years
| ago, smart home stuff was all the rage, and not only was
| it expensive, the backend ran in the cloud, and you
| usually paid a subscription for it.
|
| Nowadays, you can buy a local Home Assistant hub (or make
| one using a Pi) and have all your stuff only connect to a
| local server.
|
| The same is true for routers, NAS, media sharing and
| streaming to TV etc. You do need to get a bit technical,
| but you don't need to do anything you couldn't figure out
| by following a 20-minute YouTube video.
| rambambram wrote:
| This. And the hordes of people reacting with some
| explanation for why this is. The 'why' is not the point, we
| already know the 'why'. The point is that you can if you
| want. Might not be easy, might not be convenient, but
| that's not the point. No one has to ask someone else for
| permission to use other tech than big tech.
|
| The explanation of 'why' is not an argument. Big tech is
| not making it easy != it's impossible. Passive sufferers
| indeed.
|
| Edit: got a website with an RSS feed somewhere maybe? I
| would like to follow more people with a point of view like
| yours.
| api wrote:
| There are more PCs and serious home computing setups today
| than there were back then. There are just _way way way_ more
| casual computer users.
|
| The people who only use phones and tablets or only use
| laptops as dumb terminals are not the people who were buying
| PCs in the 1980s and 1990s, or if they were, they were not
| serious users. They were mostly non-computer-users.
|
| Non-computer-users have become casual consumer level computer
| users because the tech went mainstream, but there's still a
| massive serious computer user market. I know many people with
| home labs or even small cloud installations in their
| basements, but there are about as many of them as serious PC
| users with top-end PC setups in the late 1980s.
| npilk wrote:
| But for a broader definition of "personal computer", the
| number of computers we have has only continued to skyrocket -
| phones, watches, cars, TVs, smart speakers, toaster ovens,
| kids' toys...
|
| I'm with GP - I imagine a future when capable AI models
| become small and cheap enough to run locally in all kinds of
| contexts.
|
| https://notes.npilk.com/ten-thousand-agents
| seniorThrowaway wrote:
| Depending on how you are defining AI models, they already
| do. Think of the $15 security camera that can detect people
| and objects. That is AI model driven. LLMs are another
| story, but smaller, less effective ones can and do already
| run at the edge.
| WhyOhWhyQ wrote:
| I guess we're in the KIM-1 era of local models, or is that
| already done?
| MSFT_Edging wrote:
| I look forward to a possibility where the dumb terminal is
| less centralized in the cloud, and more how it seems to work
| in The Expanse. They all have hand terminals that seem to
| automatically interact with the systems and networks of the
| ship/station/building they're in. Linking up with local
| resources, and likely having default permissions set to
| restrict weird behavior.
|
| Not sure it could really work like that IRL, but I haven't
| put a ton of thought into it. It'd make our always-online
| devices make a little more sense.
| dzonga wrote:
| This -- chips are getting fast enough, both ARM and x86. Unified
| memory architecture means we can get more RAM on devices at
| higher throughput. We're already seeing local models - it's just
| that their capability is limited by RAM.
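|
| A rough back-of-the-envelope sketch of why RAM is the limit
| (rule-of-thumb estimates only, not benchmarks): weight memory
| is roughly parameters times bits-per-weight, plus some overhead
| for the KV cache and runtime.
|
|     # Rough memory estimate for running a local model (illustrative only).
|     def approx_memory_gb(params_billion, bits_per_weight, overhead=0.15):
|         weight_bytes = params_billion * 1e9 * bits_per_weight / 8
|         return weight_bytes * (1 + overhead) / 1e9
|
|     for params in (8, 20, 70, 120):
|         print(f"{params}B @ 4-bit ~ {approx_memory_gb(params, 4):.0f} GB")
|     # 8B ~ 5 GB, 20B ~ 12 GB, 70B ~ 40 GB, 120B ~ 69 GB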
| 8ytecoder wrote:
| Funny you would pick this analogy. I feel like we're back in
| the mainframe era. A lot of software can't operate without an
| internet connection. Even if in practice they execute some of
| the code on your device, a lot of the data and the heavyweight
| processing is already happening on the server. Even basic
| services designed from the ground up to be distributed and
| local first - like email ("downloading") - are used in this
| fashion - like gmail. Maps apps added offline support years
| after they launched and still cripple the search. Even git has
| GitHub sitting in the middle and most people don't or can't use
| git any other way. SaaS, Electron, ...etc. have brought us back
| to the mainframe era.
| thewebguyd wrote:
| It's always struck me as living in some sort of bizaro world.
| We now have these super powerful personal computers, both
| handheld (phones) and laptops (My M4 Pro smokes even some
| desktop class processors) and yet I use all this powerful
| compute hardware to...be a dumb terminal to someone else's
| computer.
|
| I had always hoped we'd do more locally on-device (and with
| native apps, not running 100 instances of chromium for
| various electron apps). But, it's hard to extract rent that
| way I suppose.
| ryandrake wrote:
| I don't even understand why computer and phone
| manufacturers even try to make their devices faster
| anymore, since for most computing tasks, the bottleneck is
| all the data that needs to be transferred to and from the
| modern version of the mainframe.
| charcircuit wrote:
| Consumers care about battery life.
| galaxyLogic wrote:
| And providers count their capacity in gigawatts.
| eloisant wrote:
| Also, when a remote service struggles I can switch to doing
| something else. When local software struggles it brings
| my whole device to its knees and I can't do anything.
| fainpul wrote:
| Yet manufacturers give us thinner and thinner phones
| every year (instead of using that space for the battery),
| and make it difficult to swap out batteries which have
| degraded.
| thewebguyd wrote:
| > make it difficult to swap out batteries which have
| degraded.
|
| That's the part that pisses me off the most. They all
| claim it's for the IP68, but that's bullshit. There's
| plenty of devices with removable backs & batteries that
| are IP68.
|
| My BlackBerry Bold 9xxx was 10mm thin. The iPhone 17 Pro
| Max is 8.75mm. You aren't going to notice the 1.25mm of
| difference, and my BlackBerry had a user-replaceable
| battery - no tools required, just pop off the back cover.
|
| The BlackBerry was also about 100 grams lighter.
|
| The non-user removable batteries and unibody designs are
| purely for planned obsolescence, nothing else.
| tim333 wrote:
| There are often activities that do require compute
| though. My last phone upgrade was so Pokemon Go would
| work again, my friend upgrades for the latest 4k video or
| similar.
| OccamsMirror wrote:
| What's truly wild when you think about it, is that the
| computer on the other end is often less powerful than your
| personal laptop.
|
| I access websites on a 64GB, 16-core device. I deploy them
| to a 16GB, 4-core server.
| eloisant wrote:
| Yes, but your computer relies on dozens (hundreds?) of
| servers at any given time.
| BeFlatXIII wrote:
| yet I use all this powerful compute hardware to...animate
| liquid glass
| tbrownaw wrote:
| > _A lot of software can't operate without an internet
| connection_
|
| Or even physical things like mattresses, according to
| discussions around the recent AWS issues.
| EGreg wrote:
| I actually don't look forward to this period. I have always
| been for open source software and distributism -- until AI.
|
| Because if there's one thing worse than governments having
| nuclear weapons, it's everyone having them.
|
| It would be chaos. And with physical drones and robots coming,
| it would be even worse. Think "shitcoins and memecoins" but
| unlike those, you don't just lose the money you put in and you
| can't opt out. They'd affect everyone, and you can never escape
| the chaos ever again. They'd be posting around the whole
| Internet (including here, YouTube deepfakes, extortion,
| annoyance, constantly trying to rewrite history, get published,
| reputational destruction at scale etc etc), and constant armies
| of bots fighting. A dark forest.
|
| And if AI can pay for its own propagation via decentralized
| hosting and inference, then the chance of a runaway advanced
| persistent threat compounds. It just takes a few bad apples, or
| even practical jokers, to unleash crazy stuff. And it will
| never be shut down, just build and build like some kind of
| Kessler syndrome. And I'm talking about just CURRENT AI
| agent and drone technology.
| graeme wrote:
| We have a ton of good, small models. The issues are:
|
| 1. Most people don't have machines that can run even midsized
| local models well
|
| 2. The local models aren't nearly as good as the frontier models
| for a lot of use cases
|
| 3. There are technical hurdles to running local models that
| will block 99% of people. Even if the steps are: download LM
| Studio and download a model
|
| Maybe local models will get so good that they cover 99% of
| normal user use cases and it'll be like using your
| phone/computer to edit a photo. But you'll still need something
| to make it automatic enough that regular people use it by
| default.
|
| That said, anyone reading this is almost certainly technical
| enough to run a local model. I would highly recommend trying
| some. Very neat to know it's entirely run from your machine and
| seeing what it can do. LM Studio is the most brainless way to
| dip your toes in.
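|
| For anyone curious what "trying one" looks like in practice:
| LM Studio can expose an OpenAI-compatible server on localhost
| (port 1234 by default), so a few lines of Python are enough to
| talk to whatever model you have loaded. A rough sketch - the
| model name here is a placeholder, and this assumes the local
| server is already running:
|
|     # pip install openai
|     from openai import OpenAI
|
|     # Point the standard OpenAI client at LM Studio's local server.
|     client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
|
|     resp = client.chat.completions.create(
|         model="local-model",  # placeholder; LM Studio serves the loaded model
|         messages=[{"role": "user", "content": "Explain KV caching briefly."}],
|     )
|     print(resp.choices[0].message.content)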
| loyalcinnamon wrote:
| As the hype is dying down it's becoming a little bit clearer
| that AI isn't like blockchain and might be actually useful
| (for non generative purposes at least)
|
| I'm curious what counts as a midsize model; 4B, 8B, or
| something larger/smaller?
|
| What models would you recommend? I have 12GB of VRAM so
| anything larger than 8B might be really slow, but I am not
| sure.
| DSingularity wrote:
| It can depend on your use case. Are you editing a large
| code base and will thus make lots of completion requests
| with large contexts?
| riskable wrote:
| My take:
|
| Large: Requires >128GB VRAM
|
| Medium: 32-128GB VRAM
|
| Small: 16GB VRAM
|
| Micro: Runs on a microcontroller or GPUs with just 4GB of
| VRAM
|
| There's really nothing worthwhile for general use cases
| that runs in under 16GB (from my testing) except a grammar-
| checking model that I can't remember the name of at the
| moment.
|
| gpt-oss:20b runs on 16GB of VRAM and it's actually quite
| good (for coding, at least)! Especially with Python.
|
| Prediction: The day that your average gaming PC comes with
| 128GB of VRAM is the day developers will stop bothering
| with cloud-based AI services. gpt-oss:120b is nearly as
| good as GPT-5 and we're still at the beginning of the AI
| revolution.
| FitchApps wrote:
| Try WebLLM - it's pretty decent and all in-browser/offline
| even for light tasks, 1B-1.5B models like
| Qwen2.5-Coder-1.5B-Instruct. I put together a quick prototype
| - CodexLocal.com - but you can essentially run a local nginx and
| use WebLLM as an offline app. Of course, you can just use
| Ollama / LM Studio, but that would require a more technical
| solution.
| jijji wrote:
| Ollama and other projects already make this possible
| raincole wrote:
| > "personal computing" period
|
| The period when you couldn't use Linux as your main OS because
| your organization asked for .doc files?
|
| No thanks.
| giancarlostoro wrote:
| I mean, people can self-host plenty off of a 5090, heck even
| Macs with enough RAM can run larger models that I can't run on
| a 5090.
| consumer451 wrote:
| I like to think of it more broadly, and that we are currently
| in the era of the first automobile. [0]
|
| LLMs are the internal combustion engine, and chatbot UIs are at
| the "horseless carriage" phase.
|
| My personal theory is that even if models stopped making major
| advancements, we would find cheaper and more useful ways to use
| them. In the end, our current implementations will look like
| the automobile pictured below.
|
| [0] https://group.mercedes-benz.com/company/tradition/company-
| hi...
| falcor84 wrote:
| I'm not a big google fan, but I really like the "Google AI Edge
| Gallery" android app [0]. In particular, I've been chatting
| with the "Gemma-3n-E2B-it" model when I don't have an internet
| connection, and it's really decent!
|
| [0]
| https://play.google.com/store/apps/details?id=com.google.ai....
| js8 wrote:
| Mainframes still exist, and they actually make a lot of sense
| from a physics perspective. It's a good idea to run transactions in
| a big machine rather than distributed; the latter is less
| energy efficient.
|
| I think the misconception is that things cannot be overpriced
| for reasons other than inefficiency.
| supportengineer wrote:
| Imagine small models on a cheap chip that can be added to
| anything (alarm clock, electric toothbrush, car keys...)
| wewewedxfgdf wrote:
| Most of the big services seem to waste so much time clunking
| through updating and editing files.
|
| I'm no expert but I can't help feeling there's lots of things
| they could be doing vastly better in this regard - presumably
| there is lots to do and they will get around to it.
| righthand wrote:
| More like AI's Diaper-Up Era aka AI's Analogy Era to Mask Its
| Shortcomings
| indigodaddy wrote:
| Funny how this guy thinks he knows exactly what's up with AI, and
| how "others" are "partly right and wrong." Takes a bit of hubris
| to be so confident. I certainly don't have the hubris to think I
| know exactly how it's all going to go down.
| fragmede wrote:
| But do you have the audacity to be wrong?
| indigodaddy wrote:
| Yeah that's interesting, good perspective
| confirmmesenpai wrote:
| takes a lot of hubris to be sure it's a bubble too.
| hitarpetar wrote:
| that's why I always identify the central position of any
| argument and take it. that way noone can accuse me of hubris
| yunnpp wrote:
| Spoken like a wise man.
| JohnnyMarcone wrote:
| You can take a position without being sure about it. e.g.
| "I'm at 70% that AI is a bubble."
| hitarpetar wrote:
| I'd probably go with 50% actually
| ivape wrote:
| The problem is that the bubble people are so unimaginative,
| similar to Krugman, that those who have any inkling of an
| imagination can literally feel like visionaries compared to
| them. I know I'm describing Dunning-Kruger, but so be it, the
| bubble people are very very wrong. It's like, man, they
| _really_ are unable to imagine a very real future.
| teaearlgraycold wrote:
| Almost everyone I hear calling our AI hype machine a bubble
| isn't claiming AI is a short-term fluke. They're saying the
| marketing doesn't match the reality. The companies don't have
| the revenue they need. The model performance is hitting the
| top of the S curve. Essentially, this is the first big wave -
| but it'll be a while before the sea level rises permanently.
| bdangubic wrote:
| > marketing doesn't match the reality.
|
| true for every marketing _ever_
| an0malous wrote:
| It's not just a marketing stunt, it's a trillion dollar
| grift that VCs are going to try to dump off onto the
| public markets when the reality doesn't catch up to the
| hype fast enough
| bccdee wrote:
| I find the argument for the bubble to be extremely
| straightforward.
|
| Currently, investment into AI exceeds the dot-com bubble by a
| factor of 17. Even in the dot-com era, the early internet was
| already changing media and commerce in fundamental ways.
| November is the three-year anniversary of ChatGPT. How much
| economic value are they actually creating? How many people
| are purchasing AI-generated goods? How much are people paying
| for AI-provided services? The value created here would have
| to exceed what the internet was generating in 2000 by a
| factor of 17 (which seems excessive to me) to even reach
| parity with the dot-com bubble.
|
| "But think where it'll be in 5 years"--sure, and let's
| extrapolate that based on where it is now compared to where
| it was 3 years ago. New models present diminishing returns.
| 3.5 was groundbreaking; 4 was a big step forward; 5 is
| incremental. I won't deny that LLMs are useful, and they are
| certainly much more productized now than they were 3 years
| ago. But the magnitude of our collective investment in AI
| requires that a huge watershed moment be just around the
| corner, and that makes no sense. _The watershed moment was 3
| years ago._ The first LLMs created a huge amount of
| potential. Now we're realizing those gains, and we're seeing
| some real value, but things are also tapering off.
|
| Surely we will have another big breakthrough some day--a
| further era of AI which brings us closer to something like
| AGI--but there's just no reason to assume AGI will crop up in
| 2027, and nothing less than that can produce the ROI that
| such enormous valuations will eventually, inexorably, demand.
| tim333 wrote:
| That "factor of 17" comes from an interest rate model that
| is unrelated to AI.
| lucaslazarus wrote:
| This is not true. Obviously the underlying effect is real
| but not nearly to this scale--for instance, neither the
| CPI nor the S&P500 are even remotely close to 17x higher
| than they were at the turn of the millennium.
| tim333 wrote:
| The source is a report written by Julien Garran based on
| the difference between actual interest rates and an idea
| of what they should be, called the Wicksell spread.
| There's a summary here
| https://www.marketwatch.com/story/the-ai-bubble-
| is-17-times-...
|
| He figured there was a credit bubble like that around the
| time of the dot com bubble and now, but the calculation is
| purely based on interest rates and the money can go into
| any assets - property, stocks, crypto etc. It's not AI
| specific.
|
| He explains it here https://youtu.be/uz2EqmqNNlE
|
| The Wicksell spread seems to have come from Wicksell's
| proposed 'natural rate of interest' detailed in his 1898
| book
|
| https://en.wikipedia.org/wiki/Knut_Wicksell#Interest_and_
| Pri...
| lucaslazarus wrote:
| I see, thank you!
| lucaslazarus wrote:
| I don't get why people find it so hard to understand that a
| technology can be value-additive and still be in a position
| of massive overinvestment. Every generation of Californians
| seeks to relive the 1848 gold rush, spending millions
| excavating rivulets for mere ounces of (very real!) gold.
| petesergeant wrote:
| Exactly this. The future impact of AI and the financial
| credibility of OpenAI as a business are completely
| distinct.
| m4rtink wrote:
| Not to mention the 1848 gold rush pretty much destroyed the
| existing society, culture and businesses:
|
| https://en.wikipedia.org/wiki/California_gold_rus
|
| Not to mention thousands of native inhabitants getting
| killed or enslaved:
|
| https://en.wikipedia.org/wiki/California_genocide
| plastic3169 wrote:
| > Even in the dot-com era, the early internet was already
| changing media and commerce in fundamental ways.
|
| I agree that AI is overhyped but so was the early web. It
| was projected to do a lot of things "soon", but was not
| really doing that much 4 years in. I don't think the
| newspapers or commerce were really worried about it. The
| transformation of the business landscape took hold after
| the crash.
| sumedh wrote:
| > The value created here would have to exceed what the
| internet was generating
|
| It's precisely why these companies are investing so much:
| robots combined with AI will be creating that value.
| bccdee wrote:
| > Robots combined with AI will be creating that value.
|
| Will they? Within what timeframe? Because a bubble
| economy can't be told to "just hang on a few more years"
| forever. LLMs are normal technology; they will not
| suddenly become something they are not. There's no
| indication that general intelligence is right on the
| horizon.
| sumedh wrote:
| If you think they won't, then you should short the stocks
| starting with Nvidia and get rich.
|
| > There's no indication that general intelligence is
| right on the horizon.
|
| You don't need general intelligence for all the tasks; if
| a robot can do some of those tasks with limited
| intelligence cheaper than a human, that is all
| corporations care about.
| askl wrote:
| > How many people are purchasing AI-generated goods?
|
| Probably a lot. I remember my mom recently showing me an
| AI-generated book she bought. And pretty much immediately
| refunded it. Not because it was AI, but because the content
| was trash.
| ivape wrote:
| What is AGI in your mind? Let's take someone who once upon
| a time was responsible for grading papers. As far as that
| person is concerned, AGI has arrived for their profession
| (it arrived nearly two years ago for them). You'll never be
| better than something that has read every book ever and can
| write better than you. AGI will come in tranches. Are you
| really going to hire that developer because you need extra
| manpower to stand up test coverage? No, so as far as that
| developer is concerned, AGI has arrived for that part of
| their professional career.
|
| The bet is not that there will be this one seminal moment
| of AGI where all the investment will make sense. The bet is
| that it has already shown up if you look for specific
| things and will continue to do so. I wouldn't bet against
| the idea that LLMs will introduce themselves to all jobs, one
| at a time. Reddit moderators, for example, will meet AGI
| (as far as they know, their entire world being moderating)
| sooner than say, I don't know, a Radiologist.
|
| The universe of people getting paid to make CRUD apps is
| over. Many here will be introduced to AGI faster and
| sooner. Then it could be phone customer support
| representatives. It could show up for the face-to-face
| worker who is now replaced by a screen that can talk to
| customers (which already arrived yesterday, it's here).
| It'll appear erratic and not cohesive, unless you zoom out
| and see the contagion.
|
| ---
|
| Rome needed to recognize that the Barbarian hordes had
| arrived. Pay attention to all the places the invasion has
| landed. You can _pretend_ like the Vandals are not in your
| town for a little bit, sure, but eventually they will be
| knocking on many doors (most likely all doors). We're in a
| time period of RADICAL transformation. There is no half-
| assing this conviction. Practicality will not serve us
| here.
| bccdee wrote:
| > Let's take someone who once upon a time was responsible
| for grading papers. As far as that person is concerned,
| AGI has arrived for their profession
|
| You're talking about TAs. I know TAs. Their jobs have not
| disappeared. They are not using AI to grade papers.
|
| > Are you really going to hire that developer because you
| need extra manpower to stand up test coverage?
|
| Yes. Unsupervised AI agents cannot currently replace
| developers. "Oh we'll get someone to supervise it"--yes,
| that person's job title is "developer" and they will be
| doing largely the same job they'd have done 5 years ago.
|
| > The universe of people getting paid to make CRUD apps
| is over.
|
| Tell that to all the people who get paid to make CRUD
| apps. Frankly, Airtable has done more to disrupt CRUD
| apps than AI ever did.
|
| > Rome needed to recognize that the Barbarian hordes had
| arrived.
|
| IDK what to tell you. All these jobs are still around.
| You're just fantasizing.
| visarga wrote:
| > Are you really going to hire that developer because you
| need extra manpower to stand up test coverage? No, so as
| far as that developer is concerned, AGI has arrived for
| that part of their professional career.
|
| That is exactly what you need in order to make AI useful.
| Even a baby needs to cry to signal its needs to parents,
| which are like ASI to it. AI working on a task lacks in 3
| domains: start, middle and finish.
|
| AI cannot create its own needs, they belong to the
| context where it is used. After we set AI to work, it
| cannot predict the outcomes of its actions unless they
| pass through your context and return as feedback. In the
| end, all benefits accumulate in the same context. Not to
| mention costs and risks - they belong to the context.
|
| The AI is a generalist, context is exactly what it lacks.
| And context is distributed across people, teams,
| companies. Context is non-fungible. You can't eat so that I
| get satiated. Context is what drives AI. And testing is
| the core contextual activity when using AI.
| techblueberry wrote:
| It's a weird comparison, since the internet in the dial-up age was
| a bubble. Are you saying the hype machine for AI is in fact
| smaller than the internet? Are you implying that AI will in
| fact grow that much more slowly and sustainably than the
| internet, despite trillions of investment?
|
| Do you think Sam Altman, Jeff Bezos, and Mark Zuckerberg are
| all wrong saying that we're in a bubble? Do they "lack
| imagination?"
|
| Also? What do I need imagination for, isn't that what AI does
| now?
| timeinput wrote:
| That's a sharp and layered question, and I think you're
| cutting right to the heart of the current tension around
| AI.
| Razengan wrote:
| How about a vague prediction that covers all scenarios? XD
|
| *ahem* It's gonna be like every other tool/societal paradigm
| shift like the smartphone before this, and planes/trains/cars/
| ships/factories/electricity/oil/steam/iron/bronze etc. before
| that:
|
| * It'll coalesce into the hands of a few corporations.
|
| * Idiots in governments won't know what the fuck to do with it.
|
| * Lazy/loud civvies will get lazier/louder through it.
|
| * There'll be some pockets of individual creativity and
| freedom, like open source projects, that will take varying
| amounts of time to catch on in popularity or fade away to
| obscurity.
|
| * One or two killer apps that seem obvious but nobody thought
| of, will come out of nowhere from some nobody.
|
| * Some groups will be quietly working away using it to enable
| the next shift, whether they know it or not.
|
| * Aliens will land turning everything upside down. (I didn't
| say when)
| Razengan wrote:
| Forgot:
|
| * Militaries will want to kill everyone with it.
| sailfast wrote:
| I recall the unit economics making sense for all these other
| industries and bubbles (short of maybe tulips, which you could
| plant...). Sure there were over-valuation bubbles because of
| speculative demand, but right now the assumption seems to be
| "first to AGI wins" but that... may not happen.
|
| The key variable for me in this house of cards is how long folks
| will wait before they need to see their money again, and whether
| these companies will go in the right direction long enough given
| these valuations to get to AGI. Not guaranteed and in the
| meantime society will need to play ball (also not a guarantee)
| kaoD wrote:
| > If you told someone in 1995 that within 25 years [...] most
| people would find that hard to believe.
|
| That's not how I remember it (but I was just a kid so I might be
| misremembering?)
|
| As I remember (and what I gather from media from the era) late
| 80s/early 90s were hyper optimistic about tech. So much so that I
| distinctly remember a ?german? TV show when I was a kid where
| they had what amounts to modern smartphones, and we all assumed
| that was right around the corner. If anything, it took too damn
| long.
|
| Were adults outside my household not as optimistic about tech
| progress?
| michaelbuckbee wrote:
| To your point, AT&T's "You Will" commercials started airing in
| 1993 and present both an optimistic and fairly accurate view of
| what the future would look like.
|
| https://www.youtube.com/watch?v=RvZ-667CEdo
| iyn wrote:
| I didn't know about these ads, thanks for sharing! Can't
| imagine how people reacted to that when they aired -- the
| things they described sound so "normal" today, I wonder if it
| was seen as far fetched, crazy or actually expected.
| EA wrote:
| In these commercials, it wasn't the technology itself but
| the ease of access and visualized integration of these
| technologies into the commoners' everyday lives that was
| the new idea.
| skywhopper wrote:
| I was in my late teens at the time. My memory is that I
| felt like the tech was definitely going to happen in some
| form, but I rolled my eyes heavily at the idea that AT&T
| was going to be the company to make it happen.
|
| If you're unfamiliar, the phone connectivity situation in
| the 80s and 90s was messy and piecemeal. AT&T had been
| broken up in 1982 (see
| https://www.historyfactory.com/insights/this-month-in-
| busine...), and most people had a local phone provider and
| AT&T was the default long-distance provider. MCI and Sprint
| were becoming real competition for AT&T at the time of
| these commercials.
|
| Anyway, in 1993 AT&T was still the crusty old monopoly on
| most people's minds, and the idea that they were going to
| be the company to bring any of these ideas to the market
| was laughable. So the commercials were basically an image
| play. The only thing most people bought from AT&T was long
| distance service, and the main threat was customers leaving
| for MCI and Sprint. The ads were memorable for sure, but I don't
| think they blew anyone's mind or made anyone stay with
| AT&T.
| mercutio2 wrote:
| We're the same age, and I had exactly the same reaction.
|
| AT&T and the baby bells were widely loathed (man I hated
| Ameritech...), so the idea they would extend their
| tentacles in this way was the main thing I reacted to.
| The technology seemed straightforwardly likely with
| Dennard scaling in full swing.
|
| I thought it would be banks that owned the customer
| relationship, not telcos or Apple (or non-existent
| Google), but the tech was just... assume
| miniaturization's plateau isn't coming for a few decades.
|
| Still pretty iconic/memorable, though!
| Arn_Thor wrote:
| Wow, that genuinely gave me goosebumps. It is incredible to
| live in a time where so much of that hopeful optimism came to
| pass.
| runarberg wrote:
| That's how I remember it too. The video is from 1999, during
| the height of the dot-com bubble. These experts are predicting
| that within 10 years the internet will be on your phone, and
| that people will be using their phones as credit cards and the
| phone company would manage the transaction. The prediction
| actually comes pretty close to the one made by bitcoin
| enthusiasts.
|
| https://bsky.app/profile/ruv.is/post/3liyszqszds22
|
| Note that this is the state TV broadcasting this in their main
| news program. The most popular daily show in Iceland.
| 0xbadcafebee wrote:
| Still waiting on my flying car.
| qayxc wrote:
| To be fair, that has been a Sci-Fi trope for at least 130
| years and predates the invention of the car itself (e.g.
| personal wings/flying horse -> flying ship -> personal
| balloon -> flying automobile). So countless generations have
| been waiting for that :)
| jeffhuys wrote:
| Might not be waiting for long.
| ehnto wrote:
| There's no way I'm trusting the current driving cohort with
| a third dimension. If we get flying cars and they aren't
| completely autonomous, I am moving to the sticks.
| iyn wrote:
| Self-flying cars? I wonder if it's actually easier to
| have autonomous vehicles operating in 3D than in "2D".
| Razengan wrote:
| Indeed, AI now is what people in the 1980s thought computers
| would be doing in 2000.
| skywhopper wrote:
| Except people thought it would get basic facts right.
| visarga wrote:
| We can't decide whether to take a vaccine even when we are
| dying left and right. And we have brains, not chips inside.
| Razengan wrote:
| Or hell, as Neil deGrasse Tyson said in a video, just put
| 2 lines with different arrows at the ends then our brains
| can't even tell if they're the same size!
|
| https://en.wikipedia.org/wiki/Muller-Lyer_illusion
| bitwize wrote:
| Recently, in my city, the garbage trucks started to come equipped
| with a device I call "The Claw" (think Toy Story). The truck
| drives to your curb where your bin is waiting, and then The Claw
| extends, grasps the bin, lifts it into the air and empties the
| contents into the truck before setting it down again.
|
| The Claw allows a garbage truck to be crewed by one man where it
| would have needed two or three before, and to collect garbage
| much faster than when the bins were emptied by hand. We don't
| know what the economics of such automation of (physical) garbage
| collection portend in the long term, but what we do know is that
| sanitation workers are being put out of work. "Just upskill," you
| might say, but until Claw-equipped trucks started appearing on
| the streets there was no _need_ to upskill, and now that they're
| here the displaced sanitation workers may be in jeopardy of being
| unable to afford to feed their families, let alone find and train
| in some new marketable skill.
|
| So no, we're in the The Claw era of AI, when business finds a new
| way to funge labor with capital, devaluing certain kinds of labor
| to zero with no way out for those who traded in such labor. The
| long-term implications of this development are unclear, but the
| short-term ones are: more money for the owner class, and some
| people are out on their ass without a safety net because this is
| Goddamn America and we don't brook that sort of commie nonsense
| here.
| sjsdaiuasgdia wrote:
| FYI, this kind of garbage truck has been around for >50 years
| [0], so any wide-scale impact on employment from this
| technology has likely already settled out.
|
| The waste collection companies in my area don't use them
| because it's rural and the bins aren't standardized. The side
| loaders don't work for all use cases of garbage trucks.
|
| [0] https://en.wikipedia.org/wiki/Garbage_truck
|
| >In 1969, the city of Scottsdale, Arizona introduced the
| world's first automated side loader. The new truck could
| collect 300 gallon containers in 30 second cycles, without the
| driver exiting the cab
| yapyap wrote:
| Big bias shining through in comparing AI to the internet.
|
| Because we all know how essential the internet is nowadays.
| slackr wrote:
| There's a big difference between the fibre infrastructure left by
| the dotcom crash, and the GPUs that AI firms will leave behind.
| gnarlouse wrote:
| I feel like this article is too cute. The internet, and the state
| of the art of computing in general has been driven by one thing
| and one thing alone: Moore's Law. In that very real sense, it
| means that the semiconductor industry, and perhaps even more
| specifically just TSMC, is responsible for the rise of the internet and the success
| of it.
|
| We're at the end of Moore's Law, it's pretty reasonable to
| assume. 3nm M5 chips mean there are--what--a few hundred silicon
| atoms per transistor? We're an order of magnitude away from .2 nm
| which is the diameter of a single silicon atom.
|
| My point is, 30 years have passed since dial up. That's a lot of
| time to have exponentially increasing returns.
|
| There's a lot of implicit assumption that "it's just possible" to
| have a Moore's Law for the very concept of intelligence. I think
| that's kinda silly.
| leptons wrote:
| Moore's law has very little to do with the physical size of a
| single transistor. It postulates that the speed and capability
| of computers will double every few years. Miniaturization is
| one way to get that increase, but there are other ways.
|
| >The internet, and the state of the art of computing in general
| has been driven by one thing and one thing alone: Moore's Law.
|
| You're wrong here... the one thing driving the internet and
| state of the art computing _is money_. Period. It wouldn't
| matter if Moore never existed, and his law was never a thing,
| money would still be driving technology to improve.
| gnarlouse wrote:
| > The one thing driving the internet and state of the art
| computing is money
|
| You're kind of separating yin from yang and pretending that
| one begot the other. The reason so much money flooded into
| chip fab was because compute is one of the few technologies
| (the only technology?) with recursive self improvement
| properties. Smaller chip fab leads to more compute, which
| enables smaller chip fabs through research modeling. Sure:
| _and it's all because humans want to do business faster_.
| But TSMC literally made chips the business and proved out the
| pure play foundry business model.
|
| > Even if Moore's Law was never a thing
|
| Then arguably in that universe, we would have eventually hit
| a ceiling, which is precisely the point I'm trying to make
| against the article: it's a little silly to assume there's an
| infinite frontier of exponential improvement available just
| because that was the prior trend.
|
| > Moore's Law has very little to do with the physical size of
| a single transistor
|
| I mean it has everything to do with the physical size of a
| single transistor, precisely because of that recursive self
| improvement phenomenon. In a universe where Moore's law
| doesn't exist, in 2025 we wouldn't be on 3nm production dies,
| and compute scale would have capped off decades ago. Or
| perhaps even a lot of other weird physical things would
| probably be different, like maybe macroscopic quantum
| phenomena or just an entire universe that is one sentient
| blob made from the chemical composition of cheeto dust.
| leptons wrote:
| Transistor size is not the only metric that matters in
| computer speed. Maybe you weren't around when 1MHz CPUs
| were considered fast. Then there were 8MHz, then 16MHz,
| then 25MHz, and soon enough it was 250MHz, then it jumped
| up to 1GHz, and now we're seeing 4GHz and faster. We're
| probably not at the end of the GHz that can be achieved.
| Chip dies got bigger, too. Way bigger. It doesn't matter if
| a single transistor can't be shrunk smaller than 3nm if the
| chip size can be increased. We've seen this in the Cerebras
| Wafer Scale Engine (WSE), which is cut from a full 12-inch
| wafer and contains _4 trillion transistors_. And then there's
| the possibility of 3D chip design - if you can't go wider,
| build taller - but the main problem with all of this is
| heat and power. More transistors, more GHz, larger dies,
| all means more heat - and heat is the real limiting factor.
| If heat and power weren't a concern then we'd have far
| faster computers.
|
| But all of these advancements in processing power are
| driven by money, not by some made-up "law" that sounds nice
| on paper but has little to do with the real world. Sorry
| but "Moore's law" isn't really a "law" in any way like the
| laws of physics.
| gnarlouse wrote:
| You've completely ignored my arguments, you're hung up on
| one technicality, and now you're just being derisive. I
| literally have a degree in computer engineering. I'm well
| aware there's more than just semiconductor size. I'm
| aware of 3D chip fabs. I'm well aware of clock speed as a
| dimension. I'm also well fucking aware that moore's law
| is not a physical law.
|
| My whole fucking point is that neither are the AI scaling
| laws.
|
| Please stop talking to me.
| leptons wrote:
| >The internet, and the state of the art of computing in
| general has been driven by one thing and one thing alone:
| Moore's Law
|
| Your original comment was downvoted quite a bit. Because
| you're wrong about this statement, and it sticks out more
| than anything else you wrote.
|
| >Please stop talking to me.
|
| Likewise.
| ares623 wrote:
| My head canon is that the thing that preemptively pops the bubble
| is Apple coming out and saying, very publicly, that AI is a dead
| end, and they are dropping it completely (no more half assed
| implicit promises).
|
| And not just that, they come out with an iPhone that has _no_
| camera as an attempt to really distance themselves from all the
| negative press tech (software and internet in particular) has at
| the moment.
| ladberg wrote:
| Do you know a single person who'd buy an iPhone without a
| camera? I don't
| GalaxyNova wrote:
| That's what they used to say about mobile phones with no
| keyboards :))
| l9o wrote:
| Keyboards were replaced with a touch screen alternative
| that effectively does the same job though. What is the
| alternative to a camera? Cameras are way too useful on a
| mobile device for anyone to even consider dropping them
| IMO.
| xwolfi wrote:
| He's obviously jesting
| l9o wrote:
| Oh. Woooosh. Thanks for still being nice about it (-:
| efskap wrote:
| AI image generators
| krackers wrote:
| Maybe not as an iphone, but they could drop the camera and
| cellular and make an ipod touch.
| NemoNobody wrote:
| That would require people who know about AI to actually choose
| to cancel it - which nobody who actually knows what AI can do
| would ever actually do.
|
| The Apple engineers, with their top level unfettered access to
| the best Apple AI - they'll convince shareholders to fund it
| forever, even if normal people never catch on.
| swyx wrote:
| _Apple at AI_ is a dead end because Apple sucks at AI, not
| because it's anything about AI
| idiotsecant wrote:
| I would go so far as to say we are still in the _computing_ dial-
| up era. We're at the tail end, maybe - we don't write machine
| code any longer, mostly, and we've abstracted up a few levels but
| we're still writing code. Eventually _computing_ is something
| that will be everywhere, like air, and natural language
| interfaces will be nearly exclusively how people interact with
| computing machines. I don't think the idea of 'writing software'
| is something that will stick around, I think we're in a very
| weird and very brief little epoch where that is a thing.
| byronic wrote:
| how much does the correction here hew to making an AI model just
| look like standardized API calls with predictable responses? If
| you took away all the costs (data centers, water consumption,
| money, etc) I still wouldn't use an LLM as a first choice because
| it's wrong enough of the time to make it useless -- I have to
| verify everything it says, which is how I would have approached a
| task in the first place. If we put that analogy into
| manufacturing, it's "I have to QA everything off of the line
| _without exception_ and I get frequent material waste"
|
| If you make the context small enough, we're back at /api/create
| /api/read /api/update /api/delete; or, if you're old-school, a
| basic function
| jdkee wrote:
| Reads like it was written by ChatGPT.
| felixfurtak wrote:
| People keep comparing the AI boom to the Dotcom bubble. They're
| wrong. Others point to the Railway Mania of the 1840s -- closer,
| but still not quite right.
|
| The real parallel is Canal Mania -- Britain's late-18th-century
| frenzy to dig waterways everywhere. Investors thought canals were
| the future of transport. They were, but only briefly.
|
| Today's AI runs on GPUs -- chips built for rendering video games,
| not thinking machines. Adapting them for AI is about as sensible
| as adapting a boat to travel across land. Sure, it moves -- but
| not quickly, not cheaply, and certainly not far.
|
| It works for now, but the economics are brutal. Each new model
| devours exponentially more power, silicon, and capital. It just
| doesn't scale.
|
| The real revolution will come with new hardware built for
| job (that hasn't been invented yet) -- thousands of times faster
| and more efficient. When that happens, today's GPU farms will
| look like quaint relics of an awkward, transitional age: grand,
| expensive, and obsolete almost overnight.
| l9o wrote:
| I think specialized hardware will emerge for specific proven
| workloads (transformer inference, for example), but GPUs won't
| become obsolete. They'll remain the experimentation platform
| for new architectures. You need flexibility to discover what's
| worth building custom silicon for.
|
| Think 3D printers versus injection molds: you prototype with
| flexibility, then mass-produce with purpose-built tooling.
| We've seen this pattern before too. CPUs didn't vanish when
| GPUs arrived for graphics. The canal analogy assumes wholesale
| replacement. Reality is likely more boring: specialization
| emerges and flexibility survives.
| roommin wrote:
| Sure, but your R&D infrastructure isn't going to be 1.5
| trillion dollars.
| realaaa wrote:
| I think it'll be a combination of hardware of course, but also
| better software - surely there is a better way of doing this
| (like our brains do) which will eventually require less power
| fhennig wrote:
| > Today's AI runs on GPUs -- chips built for rendering video
| games, not thinking machines. Adapting them for AI is about as
| sensible as adapting a boat to travel across land.
|
| A GPU is fundamentally just a chip for matrix operations, and
| that's good for graphics but also for "thinking machines" as we
| currently have them. I don't think it's like a boat traveling
| on land at all.
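|
| The overlap is easy to see in code: a dense neural-network layer
| is just a matrix multiply plus a bias, the same kind of bulk
| multiply-accumulate work GPUs were built to do for vertex and
| pixel math. A tiny illustrative sketch (NumPy, nothing
| GPU-specific; the shapes are arbitrary):
|
|     import numpy as np
|
|     batch, d_in, d_out = 32, 512, 256
|     x = np.random.randn(batch, d_in)   # activations ("many data points")
|     W = np.random.randn(d_in, d_out)   # learned weights
|     b = np.random.randn(d_out)         # bias
|
|     y = x @ W + b                      # one dense layer: a matrix multiply
|     print(y.shape)                     # (32, 256)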
| ozgung wrote:
| Another definition: a modern GPU is a general-purpose computer
| that can make parallelized and efficient computations. It's
| optimized to run a limited number of operations, but on a large
| number of data points.
|
| This happens to be useful both for graphics (the same "program"
| running on a huge number of pixels/vertices) and neural
| networks (the same neural operations on a huge number of
| inputs/activations).
| aurareturn wrote:
| Nvidia's enterprise GPUs have nothing to do with graphics
| anymore except for the name.
| felixfurtak wrote:
| GPUs are massively parallel, sure, but they still have a
| terrible memory architecture and are difficult to program
| (and are still massively memory constrained). It's only
| Nvidia's development in CUDA that made it even feasible to
| create decent ML models on GPUs.
| hnburnsy wrote:
| So weird, I asked AI (Grok) just yesterday how far along we are
| towards post-scarcity and it replied...
|
| >We're in the 1950s equivalent of the internet boom -- dial-up
| modems exist, but YouTube doesn't.
| atq2119 wrote:
| Which is ironic, considering that the 1950s were long before
| the internet boom. The internet didn't even exist yet, let
| alone dial-up modems.
| buu700 wrote:
| I was curious and looked this up:
| https://en.wikipedia.org/wiki/Modem#1950s
|
| _Mass production of telephone line modems in the United
| States began as part of the SAGE air-defense system in 1958,
| connecting terminals at various airbases, radar sites, and
| command-and-control centers to the SAGE director centers
| scattered around the United States and Canada._
|
| _Shortly afterwards in 1959, the technology in the SAGE
| modems was made available commercially as the Bell 101, which
| provided 110 bit/s speeds. Bell called this and several
| other early modems "datasets"._
| gizajob wrote:
| Great analysis but one thing overlooked is that current gen
| advanced AI could in five or ten years (or less) be run from the
| smartphone or desktop, which could negate all the capex from the
| hyperscalers and also Nvidia, which presents a massive target for
| competitors right now. The self same AI revolution we're seeing
| created right now could take itself down if AI tooling becomes
| widespread.
| melagonster wrote:
| If this happens, everyone's computer will contain one Nvidia
| GPU.
| port3000 wrote:
| Not really. Apple is a very strong competitor here.
| 0xbadcafebee wrote:
| It's clear that AI is useful. It's not yet clear how useful. Hype
| has always obscured real value, and nobody knows the real value
| until the hype cycle completes.
|
| What is clear, is that we have strapped a rocket to our asses,
| fueled with cash and speculation. The rocket is going so fast we
| don't know where we're going to land, or if we'll land softly, or
| in a very large crater. The past few decades have examples of
| craters. Where there are potential profits, there are people who
| don't mind crashing the economy to get them.
|
| I don't understand why we're allowing this rocket to begin with.
| Why do we _need_ to be moving this quickly and dangerously? Why
| do we _need_ to spend trillions of dollars overnight? Why do we
| _need_ to invest half the fucking stock market on this brand new
| technology as fast as we can? Why can 't we develop it in a way
| that isn't insanely fast and dangerous? Or are we incapable of
| decisions not based on greed and FOMO?
| xwolfi wrote:
| Who is "we" ? I certainly don't spend trillions on frivolities.
| I think the Saudis via Softbank do, and these people build fake
| cities in the desert, they are by definition dumb money.
|
| They earn so much from oil and are so keenly aware this will
| stop, they'd rather spend a trillion on a failure than keep
| that cash rotting away with no future investment.
|
| No project, no country, can swallow the Saudi oil money like
| Sam Altman can. So, they're building enormous data centers with
| custom nuclear plants and call that Stargate to syphon that
| dumb money in. It's the whole business model of Softbank: find
| a founder whose hubris is as big as Saudi stupidity.
| hi_hi wrote:
| The article seems well researched, has some good data, and is
| generally interesting. It's completely irrelevant to the reality
| of the situation we are currently in with LLMs.
|
| It's falling into the trap of assuming we're going to get to the
| science fiction abilities of AI with the current software
| architectures, and within a few years, as long as enough money is
| thrown at the problem.
|
| All I can say for certain is that all the previous financial
| instruments that have been jumped on to drive economic growth
| have eventually crashed. The dot com bubble, credit instruments
| leading to the global financial crisis, the crypto boom, the
| current housing markets.
|
| The current investments around AI that we're all agog at are just
| another large scale instrument for wealth generation. It's not
| about the technology. Just like VR and BioTech wasn't about the
| technology.
|
| That isn't to say the technology outcomes aren't useful and
| amazing, they are just independent of the money. Yes, there are
| Trillions (a number so large I can't quite comprehend it to be
| honest) being focused into AI. No, that doesn't mean we will get
| incomprehensible advancements out the other end.
|
| AGI isn't happening this round folks. Can hallucinations even be
| solved this round? Trillions of dollars to stop computers lying
| to us. Most people where I work don't even realise hallucinations
| are a thing. How about a Trillion dollars so Karen or John stop
| dismissing different viewpoints because a chat bot says something
| contradictory, and actually listen? Now that would be worth a
| Trillion dollars.
|
| Imagine a world where people could listen to others outside of
| their bubble. Instead they're being given tools that reinforce
| the bubble.
| DanHulton wrote:
| Indeed, this could be AI's fusion energy era, or AI's VR era,
| or even AI's FTL travel era.
| nickphx wrote:
| Dial-up was actually useful though.
| skywhopper wrote:
| Really tired of seeing the story about how, "sure Worldcom et al
| went bankrupt but their investments in fiber optics gave us the
| physical infrastructure of the Internet today."
|
| I mean, sort of, but the fiber optics in the ground have been
| upgraded to several orders of magnitude beyond their original capacity
| by replacing the transceivers on either end. And the fiber itself
| has lasted and will continue to last for decades.
|
| Neither of those properties is true of the current datacenter/GPU
| boom. The datacenter buildings may last a few decades but the
| computers and GPUs inside will not and they cannot be easily
| amplified in their value as the fiber in the ground was.
| blazespin wrote:
| KIMI just proposed linear attention. I mean, one breakthrough,
| and blammo, the whole story changes.
| ecommerceguy wrote:
| I'm getting AI fatigue. It's OK for rewriting quick emails that I'm
| having brain farts on, but for anything deep it just sucks. I
| certainly can't see paying for it.
| aurareturn wrote:
| Weird because AI has been solving hard problems for me. Even
| finding solutions that I couldn't find myself. I.e. sometimes my
| brain can't wrap around a problem; I throw it to AI and it
| perfectly solves it.
|
| I pay for chatgpt plus and github copilot.
| leptons wrote:
| It is weird that AI is solving hard problems for you. I can't
| get it to do the most basic things consistently, most of the
| time it's just pure garbage. I'd never pay for "AI" because
| it wastes more of my time than it saves. But I've never had a
| problem wrapping my head around a problem, I solve problems.
|
| I'm curious what kind of problem your "brain can't wrap
| around", but the AI could.
| praveen9920 wrote:
| In my case, learning new stuff is one place I see AI
| playing a major role, especially academic research, which
| is hard to start if you are a newbie; but with AI I can start
| my research and read more papers with better clarity.
| aurareturn wrote:
| > I'm curious what kind of problem your "brain can't wrap
| around", but the AI could.
|
| One of the most common use cases is that I can't figure out
| why my SQL statement is erroring or doesn't work the way it
| should. I throw it into ChatGPT and it usually solves it
| instantly.
| Wilduck wrote:
| Is that a "hard problem" though? Really?
| aurareturn wrote:
| Yes. To me, it is. Sometimes queries I give it are
| 100-200 lines long. Sure, I can solve it eventually but
| getting an "instant" answer that is usually correct?
| Absolutely priceless.
|
| In the past it was pretty common for me to spend a day stuck
| on a gnarly problem. Most developers have. Now I'd
| say that's extremely rare. Either an LLM will solve it
| outright quickly or I get enough clues from an LLM to
| solve it efficiently.
| navigate8310 wrote:
| Usually the term, "hard problem", is reserved for
| problems that require novel solutions
| aurareturn wrote:
| It is not. It's relative to the subject.
|
| In this case, the original author stated that AI only
| good for rewriting emails. I showed a much harder problem
| that AI is able to help me with. So clearly, my problem
| can be reasonably described as "hard" relative to
| rewriting emails.
| IgorPartola wrote:
| Have you ever read Zen and the Art of Motorcycle
| Maintenance? One of the first examples in that book is
| how when you are disassembling a motorcycle any one bolt
| is trivial until one is stuck. Then it becomes your
| entire world for a while as you try to solve this problem
| and the solution can range from trivial to amazingly
| complex.
|
| You are using the term "hard problem" to mean something
| like solving P = NP. But in reality as soon as you step
| outside of your area of expertise most problems will be
| hard for you. I will give you some examples of things you
| might find to be hard problems (without knowing your
| background):
|
| - what is the correct way to frame a door into a
| structural exterior wall of a house with 10 foot ceilings
| that minimized heat transfer and is code compliant.
|
| - what is the correct torque spec and sequence for a
| Briggs and Stratton single cylinder 500 cc motor.
|
| - how to correctly identify a vintage Stanley hand plane
| (there were nearly two dozen generations of them, some
| with a dozen different types), and how to compare them
| and assess their value.
|
| - how to repair a cracked piece of structural plastic.
| This one was really interesting for me because I came up
| with about 5 approaches and tried two of them before
| asking an LLM and it quickly explained to me why none of
| the solutions I came up with would work with that
| specific type of plastic (HDPE is not something you can
| glue with most types of resins or epoxies and it turns
| out plastic welding is the main and best solution). What
| it came up with was more cost efficient, easier, and
| quicker than anything I thought up.
|
| - explaining why mixing felt, rust, and CA glue caused an
| exothermic reaction.
|
| - find obscure local programs designed to financially
| help first time home buyers and analyze their eligibility
| criteria.
|
| In all cases I was able to verify the solutions. In all
| cases I was not an expert on the subject and in all cases
| for me these problems presented serious difficulty so you
| might colloquially refer to them as hard problems.
| m4rtink wrote:
| If you have 200-line SQL queries, you have a whole other
| kind of problem.
| r0x0r007 wrote:
| not unless you are working on todo apps.
| hshdhdhehd wrote:
| TODO: refactor the schema design.
| hshdhdhehd wrote:
| The problem with this is that people will accept tech debt
| and slow queries so long as the LLM can make sense of it
| (allegedly!).
|
| So the craft is lost: making that optimised query, or
| simplifying the solution space.
|
| No one will ask "should it be relational even?" if the
| LLM can spit out SQL; they'll just move on to the next problem.
| aurareturn wrote:
| So why not ask the LLM if it should be relational and
| provide the pros and cons?
|
| Anyway, I'm sure people have asked if we should be
| programming in C rather than Assembly to preserve the
| craft.
| GoatInGrey wrote:
| Surely you understand the difference between not knowing
| how to do anything by yourself and only knowing how to
| use high-level languages?
| hshdhdhehd wrote:
| That is like using the LLM like a book. Sure, do that! But a
| human still needs to understand and make the decisions.
| Draiken wrote:
| You might be robbing yourself of the opportunity to learn
| SQL for real by short-cutting to a solution that might not
| even be the correct one.
|
| I've tried using LLMs for SQL and they fail at exactly
| that: complexity. Sure, they'll get the basic queries right,
| but throw in anything that's not standard everyday SQL and
| they'll confidently give you solutions that are not great.
|
| If you don't know SQL enough to figure out these issues
| in the first place, you don't know if the solutions the
| LLM provides are actually good or not. That's a real bad
| place to be in.
| leptons wrote:
| What happens when these "AI" companies start charging you
| what it _really_ costs to run the "AI"? You'd very
| likely balk at it and have to learn SQL yourself. Enjoy
| it while it lasts, I guess?
| enraged_camel wrote:
| I work with some very complex queries (that I didn't
| write), and yeah, AI is an absolute lifesaver, especially
| in troubleshooting situations. What used to take me hours
| now takes me minutes.
| Daz912 wrote:
| Sounds like you're not capable of using AI correctly, user
| error.
| lompad wrote:
| "It can't be that stupid, you must be prompting it
| wrong!"
|
| Sigh.
| leptons wrote:
| Sorry, I'm not taking a comment like this from a 2-hour-old
| account seriously. _You don't know me at all._
| sumedh wrote:
| Which model are you using?
| weregiraffe wrote:
| >Weird because AI has been solving hard problems for me.
|
| Examples or it didn't happen.
| DecentShoes wrote:
| Can you give some examples??
| jaggederest wrote:
| Calculate the return on investment for a solar installation
| of a specified size on a specified property based on the
| current dynamic prices of all of the panels, batteries,
| inverter, and balance of system components, the current
| zoning and electrical code, the current cost of capital,
| the average insolation and weather, taking into account likely
| future shifts as weather instability increases with rising
| global temperatures, the chosen installation method and angle,
| and
| the optimal angle of the solar panels if adjusted monthly
| or quarterly. Now do a Manual J calculation to determine
| the correct size of heat pump in each section of that
| property, taking into account number of occupants,
| insulation level, etc.
|
| ChatGPT is currently the best solar calculator on the
| publicly accessible internet and it's not even close. It'll
| give you the internal rate of return, it'll ask all the
| relevant questions, find you all the discounts you can take
| in taxes and incentives, determine whether you should pay
| the additional permitting and inspection cost for net
| metering or just go local usage with batteries, size the
| batteries for you, and find some candidate electricians to
| do the actual installation once you acquire the equipment.
|
| Edit: My guess is that it'd cost several thousand dollars
| to hire someone to do this for you, and it'll save you
| probably in the $10k-$30k range on the final outcomes,
| depending on the size of system.
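|
| For anyone curious what the core of that arithmetic looks
| like, here's a rough back-of-the-envelope sketch in Python.
| Every number below is a placeholder assumption, not a figure
| from any real quote, and the real calculation has far more
| inputs (as listed above):
|
|     # toy solar payback sketch -- every input is a placeholder
|     system_cost = 18000.0             # installed, pre-incentive
|     incentives  = 0.30 * system_cost  # e.g. a 30% tax credit
|     annual_kwh  = 9000.0              # first-year production
|     tariff      = 0.15                # $/kWh offset by the system
|     degradation = 0.005               # output loss per year
|     years       = 25
|
|     cashflows = [-(system_cost - incentives)]
|     for y in range(1, years + 1):
|         cashflows.append(annual_kwh * (1 - degradation) ** y * tariff)
|
|     def npv(rate):
|         return sum(cf / (1 + rate) ** t
|                    for t, cf in enumerate(cashflows))
|
|     # IRR by bisection on NPV; the root is bracketed here since
|     # npv(0) > 0 and npv(1) < 0 for these numbers
|     lo, hi = 0.0, 1.0
|     for _ in range(100):
|         mid = (lo + hi) / 2
|         lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
|
|     payback = (system_cost - incentives) / cashflows[1]
|     print(f"payback ~ {payback:.1f} years, IRR ~ {lo:.1%}")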
| m4rtink wrote:
| Any way to tell if the convincing final numbers it told
| you are real or hallucinated?
| jaggederest wrote:
| I checked them carefully myself with various other tools.
| It was using Python to do the math, so I trust it to a
| single standard deviation at least.
| mb7733 wrote:
| Standard deviation of what?
| caminante wrote:
| I'm lost too. Financials are technology agnostic.
|
| They probably meant that they could read (and trace) the
| logic in Python for correctness.
| visarga wrote:
| Solve the same task with ChatGPT, Gemini and Claude. If
| they agree, you can be reasonably sure.
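|
| A minimal sketch of that cross-check, where the ask_*
| callables stand in for whatever client code you use to query
| each model (they are placeholders, not real SDK calls), and
| exact string agreement is a stand-in for whatever comparison
| makes sense for the task:
|
|     # toy consensus check across three models
|     def consensus(question, ask_gpt, ask_gemini, ask_claude):
|         answers = [ask(question)
|                    for ask in (ask_gpt, ask_gemini, ask_claude)]
|         # trust the answer only if all three agree exactly;
|         # numeric answers would need a tolerance instead
|         return answers[0] if len(set(answers)) == 1 else None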
| aprilthird2021 wrote:
| My God, the first example is having an AI do math, and then
| he says "Well I trust it to a standard deviation."
|
| So it's literally the same as googling "what's the ballpark
| solar installation cost for X in Y area." Unbelievable, and
| people pay $20+ per month for this.
| bdangubic wrote:
| $200 :)
| IgorPartola wrote:
| As an LLM skeptic who got a Claude subscription, I can say the
| free models are both much dumber and configured for low
| latency and short, dumb replies.
|
| No it won't replace my job this year or the next, but what
| Sonnet 4.5 and GPT 5 can do compared to e.g. Gemini Flash 2.5
| is incredible. They for sure have their limits and do
| hallucinate quite a bit once the context they are holding gets
| messy enough but with careful guidance and context resets you
| can get some very serious work done with them.
|
| I will give you an example of what it can't do and what it can:
| I am working on a complicated financial library in Python that
| requires understanding nuanced parts of tax law. Best in class
| LLM cannot correctly write the library code because the core
| algorithm is just not intuitive. But it can:
|
| 1. Update all invocations of the library when I add non-
| optional parameters that in most cases have static values. This
| includes updating over 100 lengthy automated tests.
|
| 2. Refactor the library to be more streamlined and robust to
| use. In my case I was using dataclasses as the base interface
| into and out of it, and it helped me split one set of classes
| into three: input, intermediate, and output, while fully
| preserving functionality (a rough sketch of that split follows
| this list). This was a pattern it suggested after a changing
| requirement made the original interface not make nearly as
| much sense.
|
| 3. Point me to the root cause of failing unit tests after I
| changed the code.
|
| 4. Suggest and implement a suite of new automated tests (though
| its performance tests were useless enough for me to toss out in
| the end).
|
| 5. Create a mock external API for me to use based on available
| documentation from a vendor so I could work against something
| while the vendor contract is being negotiated.
|
| 6. Create comprehensive documentation on library use with
| examples of edge cases based on code and comments in the code.
| Also generate solid docstrings for every function and method
| where I didn't have one.
|
| 7. Research thorny edge cases and compare my solutions to
| commercial ones.
|
| 8. Act as a rubber ducky when I had to make architectural
| decisions to help me choose the best option.
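|
| To make the split in item 2 concrete, here is a toy
| illustration of the input/intermediate/output pattern (the
| class and field names are invented for this sketch; the real
| library is far more involved):
|
|     from dataclasses import dataclass
|
|     @dataclass(frozen=True)
|     class TaxInput:           # what callers hand to the library
|         gross_income: float
|         filing_year: int
|
|     @dataclass
|     class TaxIntermediate:    # internal working state, never exposed
|         taxable_income: float
|         applied_rules: list[str]
|
|     @dataclass(frozen=True)
|     class TaxResult:          # what callers get back
|         tax_owed: float
|         effective_rate: float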
|
| It did all of the above without errors or hallucinations. And
| it's not that I am incapable of doing any of it, but it would
| have taken me longer and would have tested my patience when it
| comes to most of it. Manipulating boilerplate or documenting
| the semantic meaning of a dozen new parameters that control
| edge-case behavior only relevant to very specific situations
| is not my favorite thing to do, but an LLM does a great job
| of it.
|
| I do wish LLMs were better than they are because for as much as
| the above worked well for me, I have also seen it do some
| really dumb stuff. But they already are way too good compared
| to what they should be able to do. Here is a short list of
| other things I had tried with them that isn't code related that
| has worked incredibly well:
|
| - explaining pop culture phenomena. For example, I had never
| understood why Dr Who fans take a goofy, campy show aimed (in
| my opinion) at 12-year-olds as seriously as if it were War and
| Peace. An LLM let me ask all the dumb questions I had about it
| in a way that explained it well.
|
| - have a theological discussion on the problem of good and evil
| as well as the underpinnings of Christian and Judaic mythology.
|
| - analyze in depth my music tastes in rock and roll and help
| fill in the gaps in terms of its evolution. It actually helped
| me identify why I like the music I like despite my tastes
| spanning a ton of genres, and specifically when it comes to
| rock, created one of the best and most well curated playlists I
| had ever seen. This is high praise for me since I pride myself
| on creating really good thematic playlists.
|
| - help answer my questions about woodworking and vintage tool
| identification and restoration. This stuff would have taken
| ages to research on forums and the answers would still be
| filled with purism and biased opinions. The LLM was able to cut
| through the bullshit with some clever prompting (asking it to
| act as two competing master craftsmen).
|
| - act as a writing critic. I occasionally like to write essays
| on random subjects. I would never trust an LLM to write an
| original essay for me but I do trust it to tell me when I am
| using repetitive language, when pacing and transitions are off,
| and crucially how to improve my writing style to take it from
| B-level college student to what I consider to be close to a
| professional writer in a variety of styles.
|
| Again I want to emphasize that I am still very much on the side
| of there being a marketing and investment bubble and of what
| LLMs can do being way overhyped. But at the same time, over the
| last few months I have been able to do all of the above just
| out of curiosity (the first coding example aside). These are
| things I would have never had the time or energy to get into
| otherwise.
| boggsi2 wrote:
| You seem very thoughtful and careful about all this, but I
| wonder how you feel about the emergence of these abilities in
| just 3 years of development? What do you anticipate it will
| be capable of in the next 3 years?
|
| With no disrespect, I think you are about 6-12 months behind
| SOTA here; the majority of recent advances have come from
| long-running task horizons. I would recommend you try some
| kind of IDE integration or CLI tool. It feels a bit unnatural
| at first, but once you adapt your style a bit, it is
| transformational. A lot of context-sticking issues get solved
| on their own.
| IgorPartola wrote:
| Oh I am very much catching up. I am using Claude Code
| primarily, and have also been playing a bit with all the
| latest API goodies from OpenAI and Anthropic, like custom
| tools, memory use, and creating my own continuous compaction
| algorithm for a specific workflow I tried. There is a lot
| happening here very fast.
|
| One thing that struck me: these models are all trained on data
| ending 1-2 years ago. I think the training cutoff for Sonnet
| 4.5 is something like May 2024. So I can only imagine what is
| being trained and tested currently. And these models are just
| so far ahead of things like Qwen and Llama for the types of
| semi-complex non-coding tasks I have tried (like interpreting
| my calendar events) that it isn't even close.
| anonzzzies wrote:
| Well, deep/hard is different I guess; I use it, day and night,
| for things I find boring: boilerplate coding (which now is
| basically everything that's not pure business logic / logic /
| etc.), corporate docs, reports, etc. Everything I don't want
| to do is done by AI now. It's great. Outside work I use it for
| absolutely nothing though; I am writing a book, a framework,
| and a database, and that's all manual work (and I don't think
| AI is good at any of those (yet)).
| mvdtnz wrote:
| It's more like the Segway era when people with huge stakes in
| Segway tried to convince the world we were about to rebuild
| entire cities around the new model.
| topranks wrote:
| Dial-up suggests he knows that performance increases of many
| orders of magnitude will happen, as with internet
| connectivity.
|
| I'm not sure that's a certainty.
| simultsop wrote:
| > MIT professor's 1993 quote
|
| words to live by...
| RyanOD wrote:
| Every few years I find myself thinking, "Wow...the latest tech is
| amazing! We were in the stone ages just a few years ago."
|
| I don't expect that to cease in my lifetime.
| BoredPositron wrote:
| It took a long, long time to go from the walking bike to the
| bicycle we know now. It's not going to be different with AI.
| Transformers will only get us so far, and for the rest we need
| another tock. AGI is not going to happen with this generation
| of hardware. We are hitting spatial scaling limits in video
| and image generation, and we are hitting limits with LLMs.
| _ink_ wrote:
| > The other claims that AI will create more jobs than it
| destroys.
|
| Maybe it's my bubble, but so far I haven't heard anyone say
| that. What kinds of jobs would those be, given that both forms
| of work, physical and knowledge, will be automatable sooner or
| later?
| joe_the_user wrote:
| I haven't seen that either.
|
| That claim just reads like he's concocted two sides for his
| position to be the middle ground between. I did that in high
| school essays, but I try to be better than that now.
| teiferer wrote:
| > Regardless of which specific companies survive, this
| infrastructure being built now will create the foundation for our
| AI future - from inference capacity to the power generation
| needed to support it.
|
| Does that comparison with the fiber infra from the dotcom era
| really hold up? Even when those companies went broke, the fiber
| was still perfectly fine a decade later. In contrast, all those
| datacenters will be useless when the technology has advanced by
| just a few years.
|
| Nobody is going to be interested in those machines 10 years from
| now, no matter if the bubble bursts or not. Data centers are like
| fresh produce. They are only good for a short period of time and
| useless soon after. They are being constantly replaced.
| zkmon wrote:
| The only problem is that the similarity with the dotcom era
| might only go so far. For example, the dotcom bubble itself
| may not have had a close analogue in its own past. This is
| because the overall world context is different and the
| interaction of social, political and economic forces is
| different.
|
| So when people say something about the future, they look to
| the past to draw projections or similar trends, but they may
| be missing the change in the full context. Demand and
| automation alone might be too few factors to understand the
| implications. What about the political, social and economic
| landscape? These systems are not insulated enough to be
| studied using just a few factors.
| hansmayer wrote:
| More like Bullshit Era
| lilerjee wrote:
| What are the disadvantages of AI?
|
| The author didn't mention them.
|
| AI companies robbed so much data from the Internet, for free
| and without permission, sacrificing the interests of website
| owners.
|
| It's not sustainable.
|
| It's impossible for AI to go far.
| cbdevidal wrote:
| I sometimes wonder whether, in a world where the data becomes
| overwhelmingly AI-generated, AI starts feeding on itself: a
| copy of a copy of a copy.
| catlifeonmars wrote:
| We're already seeing this sort of well poisoning occur.
| 23434dsf wrote:
| HN is struggling to understand
| roommin wrote:
| Enlighten us.
| delegate wrote:
| In the dial-up era, the industry was young, there were no
| established players, it was all a big green field.
|
| The situation is far from similar now. Now there's an app for
| everything and you must use all of them to function, which is
| both great and horrible.
|
| In my experience, the current generation of AI is unreliable
| and so cannot be trusted. It makes non-obvious mistakes and
| often sends you off on tangents, which consumes energy and
| leads to confusion.
|
| It's an opinion I've built up over time from using AI
| extensively. I would have expected my opinion to improve after 3
| years, but it hasn't.
| geon wrote:
| The LLM architectures we have now have reached their full
| potential already, so going further would require something
| completely different. It isn't a matter of refining the existing
| tech, whereas the internet of 1997 is virtually technologically
| identical to what we have today. The real change has been
| sociological, not technological.
|
| To make a car analogy: the current LLMs are not the early
| cars, but the most refined horse-drawn carriages. No matter
| how much money is poured into them, you won't find the future
| there.
| ozgung wrote:
| > The LLM architectures we have now have reached their full
| potential already.
|
| How do we know that?
| efficax wrote:
| What we can say right now is that we've hit the point of
| diminishing returns, and the only way we're going to get
| significantly more capable models is through a technological
| advance that we cannot foresee (and that may not come for
| decades, if it ever comes).
| polynomial wrote:
| Exactly. You're absolutely right to focus on that.
| tim333 wrote:
| You could see some potential modifications. Already some are
| multimodal. You'd probably want something to change the
| weights as time goes on so they can learn. It might be more
| like steam engines needing to be converted to petrol engines.
| mkl wrote:
| Dial-up modems reached their full 56kbps potential in 1997, and
| going further required something completely different. It
| happened naturally to satisfy demand, and was done by many of
| the same companies and people; the change was technological,
| not sociological.
|
| I think we're probably still far from the full potential of
| LLMs, but I don't see any obstacles to developing and switching
| to something better.
| volkl48 wrote:
| I don't think that comparison works very well at all.
|
| We had plenty of options for better technologies, both
| available and in planning; 56k modems were just the
| cost-effective, lowest-common-denominator choice of their era.
|
| It's not nearly as clear that we have some sort of proven,
| workable ideas for where to go beyond LLMs.
| Enginerrrd wrote:
| The current generation of LLMs has convinced me that we
| already have the compute and the data needed for AGI; we just
| likely need a new architecture. But I really think such an
| architecture could be right around the corner. It appears to
| me that the building blocks are there for it; it would just
| take someone with the right luck and genius to make it happen.
| netdevphoenix wrote:
| > The current generation of LLMs has convinced me that we
| already have the compute and the data needed for AGI; we just
| likely need a new architecture.
|
| This is likely true but not for the reasons you think about.
| This was arguably true 10 years ago too. A human brain uses
| 100 watts per day approx and unlike most models out there,
| the brain is ALWAYS in training mode. It has about 2
| petabytes of storage.
|
| In terms of raw capabilities, we have been there for a very
| long time.
|
| The real challenge is finding the point where we can build
| something that is AGI level with the stuff we have. Because
| right now, we might have the compute and data needed for AGI
| but we might lack the tools needed to build a system that
| efficient. It's like a little dog trying to enter a fenced
| house: the topologically closest path between the dog and the
| house might not be accessible to that dog at that point, given
| its current capabilities (short legs, inability to jump high
| or push through the fence standing in between), so a path that
| is topologically farther might in practice be the quickest way
| to reach the house.
|
| In case it's not obvious, AGI is the house, we are the little
| dog, and the fence represents the current challenges to
| building AGI.
| Flashtoo wrote:
| The notion that the brain uses less energy than an
| incandescent lightbulb and can store less data than YouTube
| does not mean we have had the compute and data needed to
| make AGI "for a very long time".
|
| The human brain is not a 20-watt computer ("100 watts per
| day" is not right) that learns from scratch on 2 petabytes
| of data. State manipulations performed in the brain can be
| more efficient than what we do in silicon. More
| importantly, its internal workings are the result of
| billions of years of evolution, and continue to change over
| the course of our lives. The learning a human does over its
| lifetime is assisted greatly by the reality of the physical
| body and the ability to interact with the real world to the
| extent that our body allows. Even then, we do not learn
| from scratch. We go through a curriculum that has been
| refined over millennia, building on knowledge and skills
| that were cultivated by our ancestors.
|
| An upper bound of compute needed to develop AGI that we can
| take from the human brain is not 20 watts and 2 petabytes
| of data, it is 4 billion years of evolution in a big and
| complex environment at molecular-level fidelity. Finding a
| tighter upper bound is left as an exercise for the reader.
| netdevphoenix wrote:
| > it is 4 billion years of evolution in a big and complex
| environment at molecular-level fidelity. Finding a
| tighter upper bound is left as an exercise for the
| reader.
|
| You have great points there and I agree. The only issue I
| take is with your remark above: surely, by your own
| definition, it is not true. Evolution by natural selection
| is not a deterministic process, so 4 billion years is just
| one of many possible periods of time needed, not necessarily
| the longest or the shortest.
|
| Also, re "The human brain is not a 20-watt computer ("100
| watts per day" is not right)", I was merely saying that
| there exist an intelligence that consumes 20 watts per
| day. So it is possible to run an intelligence on that
| much energy per day. This and the compute bit do not
| refer to the training costs but to the running costs
| after all, it will be useless to hit AGI if we do not
| have enough energy or compute to run it for longer than
| half a millisecond or the means to increase the running
| time.
|
| Obviously, the path to designing and training AGI is going
| to take much more than that, just as it did for the human
| brain. But given that the path to the emergence of the
| human brain wasn't the most efficient one, thanks to the
| inherent randomness of evolution by natural selection,
| there is no need to pretend that all the circumstances
| around the development of the human brain apply to us; our
| process isn't random at all, nor is it parallel at a global
| scale.
| Flashtoo wrote:
| > Evolution by natural selection is not a deterministic
| process so 4 billion years is just one of many possible
| periods of time needed but not necessarily the longest or
| the shortest.
|
| That's why I say that is an upper bound - we know that it
| _has_ happened under those circumstances, so the minimum
| time needed is not more than that. If we reran the
| simulation it could indeed very well be much faster.
|
| I agree that 20 watts can be enough to support
| intelligence and if we can figure out how to get there,
| it will take us much less time than a billion years. I
| also think that on the compute side for developing the
| AGI we should count all the PhD brains churning away at
| it right now :)
| recursive wrote:
| "watts per day" is just not a sensible metric. watts
| already has the time component built in. 20 watts is a
| rate of energy usage over time.
| visarga wrote:
| > The current generation of LLMs has convinced me that we
| already have the compute and the data needed for AGI; we just
| likely need a new architecture.
|
| I think this is one of the greatest fallacies surrounding
| LLMs. This one, and the other one - scaling compute! The
| models are plenty fine; what they need is not better models
| or more compute, but better data, or better feedback to keep
| iterating until they reach the solution.
|
| Take AlphaZero, for example: it was a simple convolutional
| network, not great compared to LLMs and small relative to
| recent models, and yet it beat the best of us at our own
| game. Why? Because it had unlimited environment access to
| play games against other variants of itself.
|
| Same for the whole Alpha* family (AlphaStar, AlphaTensor,
| AlphaCode, AlphaGeometry and so on): trained with copious
| amounts of interactive feedback, they could reach top human
| level or surpass humans in specific domains.
|
| What AI needs is feedback, environments, tools, real world
| interaction that exposes the limitations in the model and
| provides immediate help to overcome them. Not unlike human
| engineers and scientists - take their labs and experiments
| away and they can't discover shit.
|
| It's also called the ideation-validation loop. AI can ideate;
| it needs validation from outside. That is why I insist the
| models are not the bottleneck.
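|
| As a minimal sketch of that ideation-validation loop (propose
| and validate are placeholder callables standing in for a model
| call and an external check such as a test suite or simulator;
| nothing here is a real API):
|
|     def ideation_validation_loop(task, propose, validate,
|                                  max_iters=10):
|         """Generate a candidate, check it against an external
|         oracle, and feed the failure back in; the external
|         feedback, not the model, does the heavy lifting."""
|         feedback = None
|         for _ in range(max_iters):
|             candidate = propose(task, feedback)  # ideation
|             ok, feedback = validate(candidate)   # validation
|             if ok:
|                 return candidate
|         return None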
| wazoox wrote:
| There are some gross approximations in the comparison. Oversized
| fibre-optic networks laid out in the late 90s were used for
| years and may in part still be in use today; today's servers
| and GPUs will be obsolete in 3 to 5 years, and not worth their
| weight in scrap metal in 10.
|
| The part about Jevons' paradox is interesting though.
| hufdr wrote:
| What makes this analogy great is that nobody in the dial-up
| days could imagine Google or YouTube. We're in the same place
| now: nobody knows who becomes "the Google of AI," and that
| uncertainty usually means a new platform is being born.
| Arn_Thor wrote:
| There is one key way in which I believe the current AI bubble
| differs from the TMT bubble. As the author points out, much of
| the TMT bubble money was spent building infrastructure that
| benefited us many decades later.
|
| But in the case of AI, that argument is much harder to make. The
| cost of compute hardware is astronomic relative to the pace of
| improvements. In other words, a million dollars of compute today
| will be technically obsolete (or surpassed on a performance/watt
| basis) much faster than the fiber optic cables laid by Global
| Crossing.
|
| And the AI data centers specialized for Nvidia hardware today may
| not necessarily work with the Nvidia (or other) hardware five
| years from now--at least not without major, costly retrofits.
|
| Arguably, any long-term power generation capacity put down for
| today's data centers would benefit the data centers of
| tomorrow, but I'm not sure much of that investment is really
| being made. There's
| talk of this and that project, but my hunch and impression is
| that much of it will end up being small-scale local power
| generation from gas turbines and the like, which is harmful for
| the local environment and would be quickly dismantled if the data
| center builders or operators hit the skids. In other words, if
| the bubble bursts I can't imagine who would be first in line to
| buy a half-built AI data center.
|
| This leads me to believe this bubble has generated much less
| useful value to benefit us in the future than the TMT bubble
| did. The inference capacity we build today is too expensive
| and ages too
| fast. So the fall will be that much more painful for the
| hyperscalers.
| innagadadavida wrote:
| One thing the textiles-vs-cars analysis misses is the
| complexity of the supply chain and the raw materials and
| components that need to be procured to make the end product.
| Steel and textiles have simple supply chains, and they went
| through a boom/bust cycle as demand plateaued. Cars, on the
| other hand, will not go through the same pattern: there are
| too many logistical things that need to line up, and the trust
| factor in each of those steps, as well as in the end product,
| is quite high.
|
| Software is similar to cars: the individual components that
| need to be properly procured and put together are very
| complex, and trust will be important. Will you trust that you,
| as a restaurant owner, vibe coded your payment stack properly,
| or will you just drop in the three lines to integrate with
| Stripe? I think most non-tech business owners will do the
| latter.
| baggachipz wrote:
| tl;dr: Gartner Hype Cycle[1]
|
| [1] https://en.wikipedia.org/wiki/Gartner_hype_cycle
| weare138 wrote:
| The last AI bubble was AI's dial-up era because it was the dial-
| up era:
|
| https://en.wikipedia.org/wiki/AI_winter
|
| https://www.youtube.com/watch?v=sV7C6Ezl35A
| polynomial wrote:
| Great comment threads here, but the OP article leans too much on
| AI generated text that is heavy on empty synthetic rhetoric and
| generative AI cliches.
| wosined wrote:
| The thing is that the average person now thinks AI is
| revolutionary. Thus, if you apply the analogy consistently, it
| tells us that the average person is wrong and that AI is NOT
| revolutionary. (I'm not arguing either case.)
___________________________________________________________________
(page generated 2025-11-04 23:01 UTC)