[HN Gopher] The Prospect of an AI Winter
       ___________________________________________________________________
        
       The Prospect of an AI Winter
        
       Author : erwald
       Score  : 87 points
       Date   : 2023-03-27 21:00 UTC (2 hours ago)
        
 (HTM) web link (www.erichgrunewald.com)
 (TXT) w3m dump (www.erichgrunewald.com)
        
       | blintz wrote:
       | I'm not an expert, but I see the main threat to continued
        | improvement as running out of high-quality data. LLMs are a cool
        | thing you can produce only because there is a bunch of freely
        | available high-quality text representing (essentially) the sum of
        | human knowledge. What if GPT-4 (or 5, or 6) is the product of
        | that, and then further improvements are difficult? This seems
        | like the most likely way for improvement to slow or halt; the
        | article cites synthetic data as a fix, but I'm skeptical that it
        | could really work.
        
         | mirekrusin wrote:
         | It's like worrying about running out of stables for horses when
         | cars already started to take over.
         | 
         | It's going to shift from requiring quality data to producing
         | quality data.
         | 
          | There is not going to be a need for it, just as there is no
          | need for more Kasparovs anymore.
         | 
          | You can see it slowly happening with artists.
         | 
          | Lawyers will take the hit (or gain a great tool to use,
          | depending on how you want to see it).
         | 
          | Medical analysis will meet the same fate. If you think that
          | medicine will require human analysis, just imagine for a second
          | what closed-loop AI could do - if it had access to data not
          | only from all hospitals but from patients as well, and could
          | munch through it continuously, together with constant access
          | to simulations and physical trials, finding patterns and
          | correlations on its own.
         | 
          | It's exciting and depressing to think that humans will
          | eventually be left to be humans the way they wish, with
          | everything sorted out - just like chickens don't mind being
          | free-range chickens.
         | 
         | There is no other direction it can go and it'll just go faster.
         | 
         | Our generation will see some mind blowing things.
         | 
          | The next generation will arrive in a world unlike anything
          | else.
        
         | marcyb5st wrote:
          | Googler, but opinions are my own.
         | 
          | More than that, I believe we will hit a ceiling when the
          | inability of these models to incorporate causality becomes
          | evident.
         | 
         | Right now, LLMs are trained by predicting the next word given a
          | context (the prompt). This approach, IMHO, gives a semblance
          | of cause/effect because the training data is made by humans and
          | obviously we are able to express ourselves and reason in those
          | terms. So we have a poor proxy for causality which, also IMHO,
          | partially explains why LLM performance degrades when they are
          | asked to solve novel problems (there was an entry a few days
          | ago about this).
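          | 
          | (A toy sketch of that objective, with made-up numbers, just to
          | make the point concrete:)
          | 
          |     # Next-word prediction: the model only ever answers
          |     # "given this context, which word comes next?" Any
          |     # cause/effect structure is absorbed indirectly from
          |     # the training text.
          |     import math
          | 
          |     def loss(next_word_probs: dict, actual: str) -> float:
          |         # cross-entropy for a single prediction step
          |         return -math.log(next_word_probs[actual])
          | 
          |     context = ("the", "glass", "fell", "and")
          |     # hypothetical model output for the word after `context`:
          |     probs = {"broke": 0.7, "bounced": 0.3}
          |     print(loss(probs, actual="broke"))  # ~0.357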
        
           | akiselev wrote:
           | After seeing LangChain and the reasoning paper I think it's
           | fairly obvious that we're just starting to scratch the
           | surface of AI architectures, dictated largely by scalability
           | of GPU resources. The LLMs we're playing with are at best the
           | proof of concept of what will eventually be the message
           | passing for the next generation of models.
        
         | brucethemoose2 wrote:
          | Even when just finetuning these models, they will pick up on
          | the weirdest things a human wouldn't see, and artificial
          | datasets will contain all kinds of invisible artifacts that
          | further "inbreeding" is going to massively amplify.
         | 
         | I am not speaking speculatively either. I have seen it happen
         | finetuning ESRGAN on previous upscales that I even personally
         | vetted as "good," and these generative models are way more
         | sensitive than the old GANs.
        
         | zarzavat wrote:
         | That we are running out of data makes continued improvement
          | _more_ likely, in the medium/long term, because it means we
         | have found close to the maximum amount of static data that
         | models need to be trained on. This means that one of the
         | factors for compute requirements (static training data size)
         | has reached its upper bound _while still within our
          | computational capacity today_, and further improvements must
         | come from elsewhere.
         | 
         | So yes, easy data scaling is coming to an end and may or may
         | not lead to a short term winter. But this also means that, all
         | being equal, training a model will be cheaper compared to a
         | situation where we still had orders of magnitude more data to
         | go.
        
         | qwertox wrote:
          | I've just spent a 4-5 hour session with ChatGPT trying to fix
          | a problem with `aiohttp`.
          | 
          | The specific problem was that while I'm in a WebSocket handler,
          | I cannot await asynchronous generators which act like
          | `asyncio.sleep()` and send messages to the client while I have
          | `heartbeat` enabled on that `WebSocketResponse` (it sends pings
          | to the client and waits for pongs to see if the connection is
          | "alive"). The issue is that the incoming pong is not getting
          | processed, so the client gets disconnected.
         | 
         | I struggled a lot with ChatGPT's help, but it was mostly
         | insightful, like rubberducking with a duck that does actually
         | try to help you.
         | 
         | Occasionally I went to Google to search for a very specific way
         | of solving the issue; for me Google is the gateway to Stack
         | Overflow, I never use Stack Overflow's search.
         | 
         | But I didn't find anything of value there. And earlier today I
         | noticed that I had unread messages in Stack Overflow, and when
          | I checked how many consecutive days I have been on SO: 1. I
          | used to be on SO for 100+ consecutive days, and lately I'm
          | struggling to use it as the go-to place for my programming
          | questions.
         | 
          | And this is where your comment comes in: I was thinking to
          | myself, what is going to happen if more people move away from
          | Stack Overflow? That place is a goldmine of programming
          | information; who will feed it?
         | 
          | One thing I really hope is that OpenAI adds some "modes", like
          | what we were poised to expect from Copilot X, where we get
          | different layouts for the webpage, so that we can choose a
          | "programmer mode" which actually lets us give proper feedback
          | in the sense of "this code worked", "that one didn't".
         | 
          | There's already a feedback mechanism, but it's not task-
          | oriented. A programmer can submit to it as well as a cook, but
          | the result won't go into a "database of usable code" which
          | ideally would be publicly accessible, or even get formed into
          | a Question/Answer pair prepared for submission to Stack
          | Overflow.
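          | 
          | For reference, a minimal sketch of the shape of the fix
          | (assuming a standard aiohttp handler; names are illustrative):
          | keep awaiting `ws.receive()` so aiohttp can process the
          | incoming pongs, and move the periodic sending into a separate
          | task.
          | 
          |     import asyncio
          |     from aiohttp import web
          | 
          |     async def stream(ws: web.WebSocketResponse):
          |         # stand-in for the async-generator work: send a
          |         # message to the client every second
          |         while not ws.closed:
          |             await ws.send_str("tick")
          |             await asyncio.sleep(1)
          | 
          |     async def handler(request: web.Request):
          |         ws = web.WebSocketResponse(heartbeat=10.0)
          |         await ws.prepare(request)
          |         sender = asyncio.create_task(stream(ws))
          |         try:
          |             # keep reading: awaiting receive() is what lets
          |             # aiohttp see the client's pongs, so the
          |             # heartbeat doesn't kill the connection
          |             async for msg in ws:
          |                 pass  # handle incoming messages here
          |         finally:
          |             sender.cancel()
          |         return ws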
        
           | mirekrusin wrote:
            | It may become the norm to write AI Q&As instead of docs for
           | projects. Maybe similar to sitemaps for web scrapers. But
           | even this will not be necessary at some point.
           | 
           | Stack Overflow will degrade to single search box. And we'll
           | all love it.
        
         | giardini wrote:
         | I see no lack of high-quality data, although it may not be the
          | kind _you_ prefer. How about the telephone/chat/internet logs
          | (not "metadata") of _every single person in the USA_? We've
         | got it! And how about using that to answer social science
         | questions and, more immediately, political questions such as:
         | 
         | - What really happens in corporate/government/private decision-
         | making?
         | 
         | - Did a particular politician really trade his vote for money?
         | How was that done?
         | 
         | - Are elected judges better/more honest than appointed judges?
         | 
         | - Are there hidden organizations within particular governmental
         | entities? e.g., a right-leaning group within the FBI who
         | secretly persecutes persons/groups of other political
         | persuasions? Are there independent entities within the CIA that
         | are capable of financing and operating on their own under the
         | umbrella of the government but also capable of evading the
         | congressional oversight specified in the Constitution?
         | 
         | - Is my wife's cousin really screwing Hunter Biden again?
        
       | anonzzzies wrote:
        | AI winter will arrive if we can't stop the models depending on
        | 'just' more training data to get better. There is no more
        | training data. We need models the same as or better than GPT-4,
        | but trained on roughly what an 18 year old >100 IQ human would
        | digest to get to that point, which is vastly less than what
        | GPT-4 gets fed.
       | 
       | If advancing means ever larger and more expensive systems and
       | ever more data, we will enter a cold winter soon.
        
         | stevenhuang wrote:
          | You are confusing the entirety of the text corpus with _all
          | data_.
         | 
         | The next step is multi-modal data. Think videos, sounds, touch,
         | etc.
         | 
          | An 18 year old has in fact "trained" on much more input data
          | than GPT-4, orders of magnitude more, considering the complete
          | informational content of our 5 senses and the implicit
          | structure of our brains "learned" through evolution.
         | 
          | Not to mention that recent research in LLM scaling (Chinchilla)
          | reveals how our current models are undertrained and over-
          | parameterized.
         | 
         | We have not plateaued yet.
        
           | PaulDavisThe1st wrote:
           | > The next step is multi-modal data. Think videos, sounds,
           | touch, etc.
           | 
           | Remember that the data _must be annotated_. This puts limits
           | on what can be usefully ingested.
        
         | pixl97 wrote:
          | There is a nearly unlimited amount of training data. So far,
          | models have been eating up text. We still have sound, image,
          | video, temperature, gravimetrics, and other sources that we
          | can feed multimodal models. And that's not even counting
          | models learning from the world itself.
        
           | akiselev wrote:
           | _> gravimetrics_
           | 
           | Calling it now: humanity will seal its fate the day we hook
           | up LIGO to ChatGPT and it gets corrupted by gravitational
           | waves from the beginning of time when the Great Old Ones
           | freely roamed the universe.
        
           | anonzzzies wrote:
            | Sound, image and video we'll tear through (it's already
            | happening); are temperature and gravimetrics valuable? (I
            | don't know; it's a question.) Games might be a source too.
            | But yes, the world itself is a good source. So maybe that
            | won't freeze it over; what about computing power/energy?
        
             | pixl97 wrote:
             | >computing power/energy?
             | 
             | Energy is "probably" not a big deal. If you're looking at
             | long term oversupply from green energy sources, it's
             | probably not hard to train with bursty, but very cheap
             | power like this.
             | 
             | Compute is currently the biggest limitation, and will
              | remain so for a long time, as long as scaling continues.
        
       | boringuser1 wrote:
       | [dead]
        
       | stephc_int13 wrote:
        | The current expectations around AI are extremely high, and
        | frankly quite a few of them border on speculative territory.
       | 
        | That said, I don't think we're going to see a new AI winter
        | anytime soon; what we're seeing is already useful, and
        | potentially transformative with a few iterative improvements
        | and better infrastructure.
        
       | zitterbewegung wrote:
        | I think many technologies go through spring, summer, and
        | eventually winter. The last one focused on good old-fashioned
        | AI. The next one was big data with ML, and this one is large
        | language models.
        
       | endisneigh wrote:
        | I think the AI winter will come, but not for the reasons the
        | author asserts (quality, reliability, etc.).
       | 
       | I think the current crop of AI is good enough. It will happen
       | because people will actually grow resentful of things that AI can
       | do.
       | 
        | I anticipate that a small yet growing segment of populations
        | worldwide will start minimizing internet usage. This will result
        | in fewer opportunities for AI to be used, and thus a lack of
        | investment and a subsequent winter.
        
       | HarHarVeryFunny wrote:
       | Let's not forget that GPT-4 was finished over 6 months ago, with
       | OpenAI now presumably well into 4.5 or 5, and Altman appearing
       | confident on what's to come ...
       | 
       | In the meantime we've got LangChain showing what's possible when
       | you give systems like this a chance to think more than one step
       | ahead ...
       | 
       | I don't see an AI winter coming anytime soon... this seems more
       | like an industry changing iPhone or AlexNet moment, or maybe
       | something more. ChatGPT may be the ENIAC of the AI age we are
       | entering.
        
       | [deleted]
        
       | mpsprd wrote:
       | >"[Which] areas of the economy can deal with 99% correct
       | solutions? My answer is: ones that don't create/capture most of
       | the value."
       | 
       | The entertainment industry disagrees with this.
       | 
        | These systems are transformative for creative works, and in
        | first-world countries this is no small part of the economy.
        
       | karmasimida wrote:
        | Previous AI winters were due to overpromising and not
        | delivering: big promises of what AI could do, without ever
        | actually delivering even a prototype in reality.
        | 
        | ChatGPT, at least with GPT-4, can already be considered what
        | someone coined a "baby AGI". It is already practical and useful,
        | so it HAS to be REAL.
       | 
       | If it is already REAL, there is no need for another winter to
       | ever come to reap the heads of liars. Instead AI will become
       | applied technology, like cars, like chips. It will evolve
       | continuously, and never go away.
        
         | PaulDavisThe1st wrote:
         | How old were you when expert systems arrived on the scene?
        
         | [deleted]
        
       | collaborative wrote:
       | > cheap text data has been abundant
       | 
        | The winter before the AI winter will consist of all the cheap
        | data disappearing. What fun will it be to write a blog post so
        | that it can be scraped by a bot and regurgitated without
        | attribution? Ditto for code
       | 
       | Or, how will info sites survive without ad revenue? Last I
       | checked bots don't consume ads
       | 
       | When the internet winter comes, all remaining sites will be
       | behind login screens and a strict ToS popup
        
       | brucethemoose2 wrote:
        | I think the hardware/cost factor is also a business one, e.g.
        | how dominant Nvidia stays in the space.
       | 
       | If they effectively shut out other hardware companies, that is
       | going to slow scaling and price/perf reduction.
        
         | pixl97 wrote:
         | At some point it's going to be difficult to shut everyone else
         | out. Intel tried this a long time ago, and while they
         | maintained dominance, they were not able to shut out
         | competitors completely, and experienced a lot of legal battles
         | over it.
         | 
          | Foreign nations aren't going to be happy about one mostly US
          | company holding all the cards, either.
        
       | atleastoptimal wrote:
        | More like there's a 5% chance there won't be a human winter in
        | the next 5 years
        
       | macawfish wrote:
       | People love talking about AI winter.
        
       | skybrian wrote:
       | Sometimes unreliability can be worked around with human
       | supervision. You wouldn't normally want to write a script that
       | just chooses the first Google result, but that doesn't mean
       | search engines aren't useful. The goal when improving a search
       | engine is to put some good choices on the first page, not
       | perfection, which isn't even well-defined.
       | 
       | The AI generators work similarly. They're like slot machines,
       | literally built on random number generators. If you don't like
       | the result, try again. When you get something you like, you keep
       | it. There are diminishing returns to re-running the same query
       | once you got a good result, because most results are likely to be
       | worse than what you have.
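        | 
        | In code, that pattern is just best-of-n sampling. A sketch with
        | stand-ins (`generate` and `score` here are hypothetical
        | placeholders for a model call and your own judgment):
        | 
        |     import random
        | 
        |     def generate(prompt: str) -> str:
        |         # stand-in for a stochastic model call (temperature
        |         # > 0): same prompt, different result each time
        |         return f"{prompt} #{random.randint(0, 999)}"
        | 
        |     def score(result: str) -> float:
        |         return random.random()  # stand-in for "do I like it?"
        | 
        |     def best_of_n(prompt: str, n: int = 5) -> str:
        |         # pull the lever n times, keep the best result
        |         return max((generate(prompt) for _ in range(n)),
        |                    key=score)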
       | 
       | Randomness can make games more fun at first. But I wonder how
       | much of a grind it will be once the novelty wears off?
        
         | Karrot_Kream wrote:
         | I think that will be the hard part of productionizing these
         | models. Some domains absolutely need high reliability, like
         | AVs, and applying these ML models is fraught because of how
         | imprecise their outputs are. Problem domains where unreliable
         | results are useful either through human mitigation (e.g. expert
          | search engines, customer service chatbots) or because
          | reliability just doesn't matter that much anyway (like image
          | generation) will be the domains where these models will have
          | sticking power.
        
         | christkv wrote:
         | The fact that they produce valuable content is more a
         | reflection of the mediocre output of many people.
         | 
         | I hold it up against the response I would expect from a human
         | and in a lot of cases I'm not sure the human wins.
        
       | ahofmann wrote:
       | I think the assumption that companies are willing to spend 10
       | billion dollars on AI training is unrealistic. Even the biggest
       | companies would find such an investment to be a financial burden.
        
         | pixl97 wrote:
         | You're "probably" right, but it's not something I'm going to
          | place bets on, given the level of uncertainty. When you have some
         | companies still sitting on war chests of tens of billions of
         | dollars, and almost nothing to spend it on, not investing in AI
         | is a risk in itself. If your competitor succeeds they may
         | rapidly take parts of your market while you now attempt to
         | reproduce their work.
         | 
         | Also if these 10 billion dollar models are 'AGI' level, then
         | your model pays for itself if you can find enough interesting
         | work to throw at it.
        
       | numinary1 wrote:
       | Being old as dirt, my observation is that potential tech
       | revolutions take ten years after the initial exuberance to be
       | realized broadly, or three to five years to fizzle. Of those that
       | fizzle, some were bad ideas and some were good ideas replaced by
       | better ideas. FWIW
        
         | mirekrusin wrote:
         | And how long did it take for those tech revolutions to gain
         | first 100M users?
        
           | PaulDavisThe1st wrote:
           | Well, during the initial "tech" (broadly interpreted to mean
           | computational processes that somehow involve the internet)
           | revolutions, there were not 100M, nor even 1M users to be
           | gained.
           | 
           | And what is a "user" when it comes to ChatGPT and it's ilk?
           | How does that compare with the definition of a user of, say,
           | the web? Or of SMS ?
        
       | christkv wrote:
        | We are already using GPT-4 for a ton of BS documents we have to
        | write for our planning permission and other semi-legal
        | paperwork.
        | 
        | My lawyer has been doing pretty much every public filing for
        | civil cases and licenses assisted by GPT. So much bureaucracy
        | could probably be removed by just having GPT validate
        | permissions and manage the correctness of the submissions,
        | leaving a human to rubber-stamp the final result, if at all.
        
       | mikewarot wrote:
       | It is entirely possible that Moore's Law gets assassinated by
       | supply chain destruction as deglobalization continues.
       | 
        | There are too many single-source suppliers in the chain up to EUV
       | lithography. We may in fact be at peak IC.
        
       | beepbooptheory wrote:
        | All this stuff can't transform anything if you can't afford to
        | keep the computer on. Which is really, to me, the most convincing
        | point in the thread this article links at the top.
       | 
        | If there _isn't_ a winter, will ChatGPT et al be able to solve
        | the energy crises they might be implicated in? Is there
        | something in their magic text completion that can stop global
        | warming? Coming famines?
       | 
       | Is perhaps the fixation on these LLMs right now, however smart
       | and full of Reason they are, not paying the fullest attention to
       | the existential threats of our meat world, and how they might
        | interfere with whatever speculative dystopia/utopia we can
       | imagine at the moment?
        
       | Havoc wrote:
       | > reliability
       | 
       | Humans are unreliable AF and we employ them just fine. Better
       | reliability would certainly be nice but I don't think it is
       | strictly speaking necessary
        
       | gumby wrote:
       | The "winter" analogy (I remember the AAAI when marvin made that
       | comment) was to the so-called "nuclear winter" that was widely
       | discussed at the time: a devestating pullback. It did indeed come
       | to pass. I don't see that any time soon.
       | 
       | I think the rather breathless posts (which I also remember from
       | the 80s and apparently used to be common in the 60s when
        | computers first appeared) will die down as the limits of LLMs
       | become more widely understood, and they become ubiquitous where
       | they make sense.
        
         | unsupp0rted wrote:
         | Unless of course the limits of LLMs are just outside the bounds
         | of LLMs being able to write their own successors.
         | 
         | In which case progress becomes unlimited and there's never an
         | AI winter again.
        
       | muyuu wrote:
       | idk if there will be that much of a winter, but i would welcome
       | it
       | 
        | in the late 90s and early 2000s, neural networks had a
        | significant stigma for being dead ends and were where unpromising
        | grads were sent - people didn't want to go there because it was a
        | self-fulfilling prophecy that if you went to research ANNs then
        | you were a loser, and you were seen as such, and in academia that
        | is all you need to be one
       | 
       | but, in real life, they worked
       | 
       | sure, not for everything because of hardware limitations among
       | other things, but these things worked and they were a useful
       | arrow in your quiver as everybody else just did whatever was
       | fashionable at the time (simulated annealing, SVMs, conditional
       | random fields, you name it)
       | 
       | hype or no hype, if you know what you are doing and the stuff you
       | do works, you will be okay
        
       | karmasimida wrote:
       | There will never be another winter moving forward.
       | 
        | ChatGPT, as is, is already transformative. It CAN do human-level
       | reasoning really well.
       | 
       | The only winter I can see, is the AI gets so good, there is
       | little incentive to improve upon it.
        
         | throwbadubadu wrote:
          | The dissonance in what people see in it is really stark.. maybe
          | I'm too stupid, but sessions from simple to more complicated
          | problems have all shown me: it is definitely not reasoning at
          | all, it doesn't understand anything.. it is a different version
          | of Stack Overflow that sometimes gets you to the target
          | quicker, but for me even more often not.
         | 
         |  _shrug_
        
         | [deleted]
        
       | totoglazer wrote:
        | Another big concern will be regulatory. It seems unlikely that a
        | couple billion people whose livelihoods are significantly
        | impacted will just chill as it happens.
       | 
       | I think it's unlikely, but no less likely than the compute issues
       | mentioned.
        
         | knodi123 wrote:
         | So the research and development will just move over to a
         | neighboring country?
         | 
            | This isn't like the Manhattan Project. There are _lots_ of
            | people who know how to make this stuff, and they don't need
            | rare volatile elements - just consumer hardware.
        
           | pixl97 wrote:
            | It depends if we're talking 'Butlerian Jihad' levels of
            | disruption here.
            | 
            | And no, the big models require enormous amounts of
            | processing power and time, so at least at the most extreme
            | scales, if nations started punching missiles into processor
            | factories and data centers you'd slow top-end AI projects
            | way down.
        
           | flangola7 wrote:
            | If it seems dire enough, lethal military force might be used.
           | 
           | If Russia and China think the US is about to have AI capable
           | of absolute total global domination they may launch a
           | preemptive strike. Maybe hypersonic cruise missiles at
           | datacenters, maybe a full EMP or nuclear launch.
           | 
           | (Swap countries around as desired.)
        
         | bloodyplonker22 wrote:
         | Isn't this a "concern" whenever a new technology comes out? ie:
         | the internet? Yet, due to how slowly government moves and how
         | hard new technology is to understand for governments, it is
         | barely a concern.
        
         | HervalFreire wrote:
         | The US is very capitalistic. You may get regulation in France,
         | but not the US.
         | 
          | The corporate desire for profit will overshadow the livelihoods
          | of billions. It's been this way in the US since forever. Look
          | what happened to the corporation that caused the opioid
          | epidemic. Nothing; they profited.
        
           | philwelch wrote:
            | If that were true, the US would be producing orders of
           | magnitude more nuclear energy than it is today. In reality
           | many sectors of the US economy, such as housing, are utterly
           | crippled by regulation.
        
       | chess_buster wrote:
       | Write a counterpoint to the article posted. Your goal is to
       | refute all claims by giving correct facts with references. Cite
        | your sources. Make it 3 paragraphs. As a poem. In Klingon.
        
       | ChatGTP wrote:
        | AI winter will likely come because we've not addressed climate
        | change... Instead of spending billions/trillions on our
        | survival, we're yet again blowing it on moonshots. We
        | collectively already have the brains to solve the problems
        | should we _want to_; we don't because that's not where "the
        | money" is.
       | 
        | Silicon Valley tech is already promising that AI will be the
        | likely solution to climate change... If there is any more
        | disruption to the economy, it's just going to slow down climate
        | change mitigation yet again, thus having negative effects on the
        | amount of capital available for these projects.
       | 
       | Printing money works, until it doesn't.
        
       | nico wrote:
       | At the same time this AI revolution is happening, there is also a
       | psychedelic revolution happening.
       | 
       | When this happened in the 60s-70s, the psychedelic revolution was
       | crushed by the government. And we entered an AI winter.
       | 
       | I'm not implying causation. Just pointing out a curious
       | correlation between the two things.
       | 
       | I wonder what will happen now.
        
         | DennisAleynikov wrote:
         | I really hope the government does not intervene in this process
         | but I know for a fact they will want to.
        
           | nico wrote:
            | You are correct. Today on HN's front page there was an
            | article about how the EU has already said they are going to
            | regulate AI.
        
       | crop_rotation wrote:
        | I think scaling limits and profitability are the only things that
        | can stop the march of AI. The utility is already there, and
        | even the current GPT-4's utility is revolutionary.
        
       | HervalFreire wrote:
       | It's different this time. Because this time AI is hugely more
       | popular in the public and corporate sphere. The previous AI
       | winters were more academic winters with few people pushing the
       | envelope.
       | 
       | I don't think compute is the issue. It's an issue with LLMs.
       | Current LLMs are just a stepping stone for true AGI. I think
       | there's enough momentum right now that we can avoid a winter and
       | find something better through sheer innovation.
        
         | pixl97 wrote:
          | I think the difference is that AI takes data, and in the past
          | we just didn't have much data.
         | 
         | Now the vast majority of the worlds population has a cellphone
         | and internet service, and use services that AI can
         | improve/affect.
        
       | stuckinhell wrote:
        | I strongly disagree. ChatGPT is bleeding into everything.
        | Midjourney is too damn good; see the examples below.
        | 
        | "The Avengers if they had 90s actors" is going viral.
        | 
        | https://cosmicbook.news/avengers-90s-actors-ai-art
        | 
        | Also the Avengers as dark sci-fi:
        | https://www.tiktok.com/@aimational/video/7186426442413215022
       | 
       | AI art and generative text is just astounding, and it's only
       | getting better.
        
       | RandomLensman wrote:
        | After the massive hype around generative AI, it seems likely
        | there will be an AI winter when the promised transformation in
        | many business areas just doesn't happen as advertised.
        
         | pixl97 wrote:
         | I remember the hype cycle around this thing called the
         | "internet" back in the day. People said it was going to take
         | over the world, even though back then it was slow and kinda
         | sucked.
         | 
         | And then it did.
        
           | JohnFen wrote:
           | The internet was well-established and its worth proven many
           | times over before it was ever open to the public, so well
           | before the hype cycle happened.
        
           | RandomLensman wrote:
           | Do you remember all the failed predictions, too?
        
           | palata wrote:
           | > And then it did.
           | 
           | After the dotcom bubble exploded. Can't we call that a
           | winter?
        
             | pixl97 wrote:
              | What metrics do you want to go by? At least by internet
              | usage, the dotcom bubble era still saw massive new user
              | growth and online time by users.
              | 
              | Was there a massive reduction in completely untenable
              | .coms? Sure, but I'm not exactly sure that's the definition
              | of a winter; plenty of other internet-based businesses did
              | fine and kept growing in that time.
        
         | ffhhj wrote:
         | > doesn't happen as advertised.
         | 
         | But there will be a small transformation:
         | 
          | * More busy work will be automated, taking away some of the
          | fun and leaving the harder tasks, thus making jobs shittier
          | and pushing workers to find other kinds of work.
          | 
          | * More AI solutions looking for problems will be implemented,
          | increasing the speed of success/failure, and from the lottery-
          | winner effect we can expect "the granny that created an AI
          | solution and earned millions", whom other devs will follow and
          | fail, leaving debts.
        
           | brazzy wrote:
           | > More busy work will be automated taking away some of the
           | fun and leaving the harder tasks thus making jobs shittier
           | 
           | Since when is busy work fun and hard tasks shitty?
        
         | Daishiman wrote:
         | A disappointment in the hype cycle isn't equivalent to AI
         | winter. The hype cycle always, eventually, hits a trough, even
         | with wildly successful products.
         | 
         | It's very likely we'll be disappointed with AI in many oversold
         | contexts (I share the sentiment about self-driving cars), but
         | it can't be denied that ChatGPT is a product that's being used
         | _right now_ to massive success and still has quite a ways to
         | go.
        
           | RandomLensman wrote:
           | It doesn't go into large value decisions and projects in a
           | meaningful way (pretty much like all AI) - that is the
           | critical thing AI needs to get to. Where was AI when
           | guaranteeing SVB's deposits was decided, for example?
        
             | Hermitian909 wrote:
             | > It doesn't go into large value decisions and projects in
             | a meaningful way (pretty much like all AI) - that is the
             | critical thing AI needs to get to
             | 
              | Large decisions such as SVB deposits are literally the last
              | thing that will be given to AI; they're the most important
              | part of the business. That doesn't mean AI is useless or is
              | not being incorporated into valuable parts of businesses.
             | 
              | I'm at a large tech company, and LLMs are already entering
              | some of our key product workflows. They massively lower the
              | floor for a number of product features, e.g. recommendation
              | systems.
             | 
             | In a more general sense, many features that were previously
             | too difficult to expose (e.g. simple coding commands) to
             | business users are now on the table as long as we give them
             | limited access to an LLM. The current hype train means that
             | we don't even need to do much user education.
        
               | RandomLensman wrote:
                | I am not claiming AI is useless, rather that pushing AI
                | too hard will create disappointment, as it isn't going
                | into every part of businesses at the same speed (or at
                | all, for some time yet, on some things).
        
           | kenjackson wrote:
           | Also can I add that I hate the comparison to self-driving
           | cars. The issue with self-driving cars is the penalty for a
            | misstep can be huge, even when doing something mundane, e.g.,
           | driving home from the neighborhood grocery store.
           | 
           | If crashing a self-driving car required me to spend 5 minutes
           | rejiggering it, I'd probably use the feature a lot. With
           | generative models there's usually very little cost to it
           | making a mistake -- and even better, I can determine when the
           | cost is likely to be high and modify my behavior.
           | 
           | Self-driving cars do seem much better than they used to be.
           | But I have no interest in using them until they get better
           | still. I don't have this same bar with LLMs or DALL-Es. And I
           | think this will contribute to more continuous improvement in
           | the technology.
        
       | superb-owl wrote:
       | We're only just seeing expectations for the tech inflate now. VCs
       | will probably pump money into LLM-related companies for at least
       | a couple years, and it'll be a couple years after that before
       | things really start to decline.
       | 
       | It's late spring right now, a strange time to start forecasting
       | winter.
        
       | codelord wrote:
       | IMHO the prospect of an AI winter is 0%. As someone who has done
       | research in ML I think ML technology is moving forward much
       | faster than we anticipated. ChatGPT shouldn't work based on what
        | we knew. It's incredible that it works. It makes you wonder what
        | other things that shouldn't work we should scale up and see if
        | they would work. And then there are things that we think should
        | work. Each new paper or result opens the door for many
       | more ideas. And there are massive opportunities in applying what
       | we already have to all industries.
       | 
        | You can absolutely build high-precision ML models. Using a
        | transformer LM to sum numbers is dumb because the model makes
        | few assumptions about the data by design; you can modify the
        | architecture to optimize for this type of number manipulation,
        | or you can change the problem to generating code for summing
        | values. In fact Google is using RL to optimize matmul
        | implementations. That's the right way of doing it.
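        | 
        | To make the "generate code instead" point concrete, a toy sketch
        | (`llm` is a hypothetical stand-in for a model call):
        | 
        |     def llm(prompt: str) -> str:
        |         # imagine the model returns a program, not an answer
        |         return "12345 + 67890"
        | 
        |     # a deterministic interpreter then does the arithmetic the
        |     # transformer itself is badly suited for
        |     expr = llm("Write a Python expression summing 12345 and "
        |                "67890.")
        |     print(eval(expr))  # 80235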
        
         | dragonwriter wrote:
         | > As someone who has done research in ML I think ML technology
         | is moving forward much faster than we anticipated.
         | 
          | The past AI winters have been preceded by periods of AI moving
          | forward faster than anticipated. It's when the limits of easy
          | advancement with the current approaches are reached, without a
          | new approach that allows continued rapid progress, that you get
          | an AI winter.
        
           | [deleted]
        
       | greatwave1 wrote:
       | Can anyone give some color on to what extent advancements in AI
       | are limited by the availability of compute, versus the
       | availability of data?
       | 
       | I was under the impression that the size and quality of the
        | training dataset had a much bigger impact on performance than
       | the sophistication of the model, but I could be mistaken.
        
         | gsatic wrote:
         | It's like a calf being born. It gets up and starts walking.
         | Pretty amazing. Mesmerises everyone. The model contains
         | everything it needs to know, to walk. But it's not going to
         | dance nor is it capable of working out how to dance.
         | 
          | Babies do something completely different. They can't walk when
          | born. Their model is to blunder about and work things out,
          | building the model up through "can I do this, can I do that,
          | why not", etc. It's only through this doing that learning
          | happens.
          | 
          | We have calf AI right now... you ask the calf what it wants to
          | learn next or what it is curious about, and you get to see how
          | dumb it is.
        
         | jacobn wrote:
         | Both matter, and returns fall off as you go further in one but
         | not the other. The Chinchilla paper[0] established a simple
         | scaling law for Large Language Models: model size and training
         | tokens should grow at the same pace.
         | 
         | Compute is then proportional to the product of model size &
         | data quantity.
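          | 
          | In rough numbers (a back-of-the-envelope sketch; the ~20
          | tokens-per-parameter ratio and the 6*N*D FLOPs rule are the
          | usual approximations, not exact figures from the paper):
          | 
          |     def chinchilla_estimate(params: float):
          |         tokens = 20 * params         # compute-optimal data
          |         flops = 6 * params * tokens  # training compute
          |         return tokens, flops
          | 
          |     # Chinchilla itself: 70B parameters
          |     tokens, flops = chinchilla_estimate(70e9)
          |     print(f"{tokens:.1e} tokens, {flops:.1e} FLOPs")
          |     # -> 1.4e+12 tokens, 5.9e+23 FLOPs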
         | 
         | That said, _quality_ of data also matters a lot - OpenAI has
         | had human labelers produce the data for their Reinforcement
         | Learning from Human Feedback (RLHF), which has probably had a
         | disproportionate impact on the success of ChatGPT compared to
         | previous models, but that data is probably O(1%) of what they
         | trained on.
         | 
         | At this point I'm guessing OpenAI are limited by both data &
         | compute. Rumor has it they're training the "next big thing" now
         | and it won't finish until December. If they had more compute
         | they could presumably finish sooner, and if they had more data
         | they would presumably let it train longer.
         | 
         | [0] https://arxiv.org/abs/2203.15556
        
           | pixl97 wrote:
           | Also at this point, very few of the biggest players are going
            | to tell us anything about which matters most, as those fine-
            | tuning numbers can represent a huge strategic advantage.
           | Forcing your competitors to spend billions in hardware and
           | time can put you far ahead of them quickly, at least at our
           | current rate.
        
         | wsgeorge wrote:
         | AFAIK it's still an active area of research, and evidence from
         | Meta AI [0] suggests that size and quality of data can let
         | smaller (not necessarily less sophisticated) models do amazing
         | things.
         | 
         | But a lot of the advancements we're seeing right now _are_ the
         | result of more sophisticated models [1], and one person is
         | doing some interesting work [2] around achieving transformer-
         | level performance with other architectures.
         | 
         | So it's not completely settled if more data is the answer. But
         | it has a significant impact.
         | 
         | [0] https://ai.facebook.com/blog/large-language-model-llama-
         | meta...
         | 
         | [1]
         | https://en.wikipedia.org/wiki/Transformer_(machine_learning_...
         | 
         | [2] https://github.com/BlinkDL/RWKV-LM
        
       | kromem wrote:
       | I increasingly think we're underestimating what's ahead.
       | 
        | Two years ago there was an opinion piece from NIST on the impact
        | optoelectronics would bring specifically to neural networks and
        | AGI. Watching nearly every major research institution
        | collectively raise probably half a billion for AI photonics
        | plays through their VC partnerships or internal resource
        | allocations, on the promise of order-of-magnitude improvements
        | much closer at hand than something like quantum computing, I
        | think we really haven't seen anything yet.
       | 
       | We're probably just at the very beginning of this curve, not
       | approaching its diminishing returns.
       | 
       | And that's both very exciting and terrifying.
       | 
        | After decades in tech (including having published a prediction
        | over a decade ago that the mid-2020s would see roles shift away
        | from programming towards specialized roles for knowing how to
        | ask AI to perform work in natural language), I think this is the
        | sort of change so large, and so breaking from precedent, that we
        | really don't know how to forecast it.
        
       | nuancebydefault wrote:
       | So. The article starts with "I give it an estimate of 5 per cent
       | chance..." and then explains: what if...
       | 
       | Is this case really worth exploring? Or was the article written
       | by a bored AI?
       | 
        | I find it striking that there are still so many people
        | downplaying the latest developments in AI. We all feel that we
        | are on the verge of a revolution on par with or even greater
        | than the emergence of the www, while some people just can't seem
        | to let it sink in.
        
         | noncoml wrote:
         | I taught my dog to bark twice when I ask him how much is one
         | plus one, and thrice when I ask him one plus two. Doesn't mean
         | my dog is "intelligent"
        
         | JohnFen wrote:
          | > while some people just can't seem to let it sink in.
         | 
         | Just because people may have opinions different from yours
         | doesn't mean they're denying reality. They just have a
         | different opinion.
         | 
         | The hard, cold truth is that nobody knows the future. Everybody
         | is just guessing.
        
           | nuancebydefault wrote:
            | Yes, that is the truth: nobody knows the future. But when you
            | see something coming right at us, why still so much doubt?
        
             | andsoitis wrote:
              | What are the top 3 things you're seeing come right at us?
             | 
             | Bonus points for things that should cause us to make major
             | adjustments (otherwise it doesn't seem worth it to expend
             | too much energy).
        
             | johnfn wrote:
             | A lot of people saw crypto coming right at us as well.
        
               | nightski wrote:
               | Crypto is a $1T industry even with all the hate it gets
               | on HN.
        
               | JohnFen wrote:
               | So?
               | 
               | Cryptocurrency (or even blockchain) has not (yet) changed
               | the world. That was what everyone was predicting.
        
               | reidjs wrote:
               | Yes it has, I see bitcoin ATMs all over the place and
               | know that tons of people exchange cryptocurrencies daily
               | for fiat and for other cryptocurrencies. A lot of people
               | use it to gamble and buy drugs online. Some people are
               | using it as a monetization scheme for their artwork
               | (NFTs). The actual impact of those things is pretty
               | small, but none of this was possible before their
               | invention, so they literally changed the world, for
               | better or worse (mostly worse).
        
               | nuancebydefault wrote:
               | It is easy to find such examples. A lot of people see
                | aliens in the skies. A lot of people buy into get-rich-
                | quick schemes.
        
               | [deleted]
        
               | palata wrote:
               | That doesn't help your case :). Your intuition is not
               | enough to say it's coming right at us. So where is the
               | data? If there is no data, then it's just an intuition.
        
               | nuancebydefault wrote:
                | With or without data, every prediction is intuition.
                | Data, if you need them, are the current events. But
                | remember the quote 'lies, damned lies, and statistics'.
                | It is the interpretation of the data that counts, and
                | that interpretation is partly subjective. Otherwise we
                | would not be having this rather fierce but interesting
                | conversation in the first place.
        
               | palata wrote:
               | Well, it can be more or less backed by facts. That's the
                | difficult part: telling the difference between what seems
               | like a reasonable prediction and what is merely a belief.
               | 
               | Climate change is coming at us. That's still a prediction
               | (it hasn't happened yet), but not believing in it right
               | now is irrational and dangerous.
               | 
               | I don't think we can say that for AI. That AI will change
               | the world is a belief. Doesn't mean that it won't.
        
               | wsgeorge wrote:
               | Crypto wasn't good enough as a currency, so what was
               | coming in terms of that use case didn't pan out en masse.
               | But it did work to an extent for some people.
               | 
               | Crypto's value fluctuation made more sense as a
               | speculative asset class, and it clearly saw most of its
               | traction there.
               | 
                | Its potential as a currency did lead to banks, financial
                | institutions and governments taking a hard look at it.
                | And CBDCs are an area of active investigation. So the
               | hyper-optimists may have been off on some of their
               | predictions, but something came out of it.
               | 
                | AGI-adjacent tools are more grounded. They don't threaten
                | to tear down existing entrenched institutions. They do
                | one simple thing: commoditize and scale some level of
                | intelligence. And they're pretty good at it, hence the
                | hype.
               | 
               | Unless humanity develops a deep distaste for many things
               | AI-made, the fascination will wear off and we'll be left
               | with the cold hard truth of raw productivity increase and
               | digital production on an unprecedented scale.
        
               | JohnFen wrote:
               | > AGI-agencent tools are more grounded. They don't
               | threaten to tear down existing entrenched institutions.
               | 
               | I disagree with this part. (I disagree with calling this
               | tech "AGI-adjacent", too, but that's not the part I'm
               | calling out here).
               | 
               | I'm not sure what you mean by "more grounded", but if you
               | mean "more based in reality", I don't think this is
               | clear. But, if what the evangelizers say is correct, it
               | absolutely threatens to tear down entrenched
               | institutions. It even threatens to tear down society
               | itself by destroying what little trust remains.
        
               | wsgeorge wrote:
               | By "grounded", I mean the promises of crypto vs the
               | promises of AI.
               | 
               | IIRC, crypto was a little too idealistic in the promise
               | of providing a digital currency without needing a
               | central, trusted authority to back it. From my
               | perspective, AI as we've seen it in the last few months
               | simply provides a tool that can automate intelligent
               | tasks easily.
               | 
               | I think the distinction is fuelled more by the companies
               | and leading figures pushing the latter. OpenAI's
               | stewardship, concerns for safety and generally
               | downplaying how amazing these tools are while not
               | ignoring the real effects they could have make AI sound
               | more "grounded" than peak crypto was. Or it could just be
               | the folks I pay more attention to.
               | 
               | > I disagree with calling this tech "AGI-adjacent", too,
               | but that's not the part I'm calling out here
               | 
               | I think this rests mostly on the definition of "General"
               | here. Here, I'm talking about LLMs as general task
               | performers, as opposed to models created for more
               | specific tasks. LLMs have proven to be more general
               | purpose, which I'd argue makes them closer to the AGI
               | ideal than, say, a sentiment analysis model.
        
               | JohnFen wrote:
               | > By "grounded", I mean the promises of crypto vs the
               | promises of AI.
               | 
               | Ah, I see. By that definition, it seems to me that
               | cryptocurrency was actually more grounded than gpt.
        
             | palata wrote:
             | Because it is not clear if it is coming right at us or not,
             | precisely.
             | 
             | Have you ever looked at predictions from big consulting
              | companies like McKinsey and the like? "This new field will
              | unlock a market of 30 billion in the next 5 years", that
             | kind of crap?
             | 
             | In the fields where I worked, I used to take those
             | predictions seriously (after all, they know better than I
             | do, right?). What I realized is that they just don't have a
             | damn clue. I can't blame them for failing: predicting the
             | future is an unsolved problem. But making that kind of
             | money by telling that kind of crap should honestly not be
             | legal.
             | 
             | All that to say, some things are clearly coming right at us
             | (ahem, climate change), some aren't (AI, for instance).
        
               | nuancebydefault wrote:
                | Could it be that events at work have been coloring your
                | perception of said consulting companies? Such companies
                | are getting paid for making bold predictions, I would
                | think. Of course OpenAI and MS get paid to say similar
                | things as well. But this time the proof of the AI
                | pudding is already partly established in the eating
                | (usage). For me, at least, that is the sign that it's
                | coming.
        
               | palata wrote:
               | > But this time the eating (usage) of the AI pudding
               | already has the proof partly established.
               | 
               | I don't know, I don't see it this way. Bitcoin at the
               | very beginning was an impressive piece of technology
               | (still is), but it clearly did more harm than good.
               | 
               | Autonomous cars had some very impressive demos, and don't
               | seem to have evolved much in a few years (it's
               | impressive, but not good enough).
               | 
               | AI is making very impressive demos now, but it's not
               | changing the world (what seems to work really well right
               | now is to generate convincing disinformation at scale,
               | which is not good).
               | 
               | To me it's all the same: impressive demo, but what
               | matters is not the first 80%, it's the last 20. Everyone
               | seems to be building infrastructure around what AI will
               | become. Just like people built entire companies betting
               | on the fact that cryptocurrencies would become global
               | (they have not), or that blockchain would be useful
                | outside of cryptocurrencies (it is not). Or like people
               | built entire companies around "the new world with
               | autonomous vehicles".
        
               | pixl97 wrote:
               | >Autonomous cars had some very impressive demos, and
               | don't seem to have evolved much in a few years
               | 
               | The problem is we got autonomous cars backwards. We
                | needed multimodal GPT first. GPT-4 can look at a picture
                | and tell you what's happening very clearly. Now, chain
               | this together really fast and you have something that
               | could be used for driving. I'm making no bets on when the
               | hardware/models will be fast/cheap/power efficient enough
               | to support this.
               | 
               | >but it's not changing the world
               | 
                | Eh, no, it has changed the world in a vast number of
                | ways, just not spread very equally. On the
                | scientific/AI side, a whole bunch of "20 years away"
                | and "we may never accomplish this" predictions have
                | been changed to "now". On the language side, GPT has
                | put every translation service on notice. Summarizers
                | too.
               | 
               | I've never been a crypto fan. I believe there are
               | purposes for it, but at the same time I have a bank that
               | does all the same things.
               | 
                | What I do (or did?) not have is an AI that actually
                | solves real-life problems that I run into, and that's
                | something that is both highly useful and something that
                | many people are willing to pay for.
        
               | marvin wrote:
               | This time it isn't a bunch of empty suits from McKinsey
               | saying it. It's the technologists that built the Internet
               | and smartphones.
               | 
               | By all means keep your own estimates of what the world
               | will look like in 10 years and act accordingly, but I'm
               | trusting my own technologist's judgement on this one.
        
               | palata wrote:
               | > keep your own estimates
               | 
               | I am not estimating anything. I'm saying that there is no
               | way to know where it will go. You are the one doing what
               | the empty suits did: you are predicting something based
               | on your beliefs only.
               | 
               | Not that you can't do it: everyone can believe what they
               | want. But it would be wise to realize that it's a belief,
               | even if you are a technologist.
        
               | JohnFen wrote:
               | Historically speaking, technologists are no better at
               | predicting the future than empty suits are.
        
             | JohnFen wrote:
             | Because it's hard to see with any clarity. The hype is so
             | extreme that it makes it hard to see anything with any sort
             | of confidence.
        
               | DennisAleynikov wrote:
                | Basically this. The hype is real. The results are
                | impressive.
                | 
                | But the examples of success so far seem cherry-picked,
                | which invites serious skepticism. There will be a
                | revolution, and it's already in progress, dethroning
                | entire business models, but the future is still very
                | uncertain.
                | 
                | What humans' role will be in this adventure into the
                | singularity is even more clouded, with more questions
                | than answers.
        
             | peoplefromibiza wrote:
              | > But when you see something coming right at us
             | 
             | examples?
             | 
             | We haven't even seen a war coming right at us.
             | 
              | My old eyes can't see perfectly anymore, so I would
              | probably miss it - whatever _it_ is - anyway. But if
              | you're so sure, would you bet everything you own on it?
        
             | PaulDavisThe1st wrote:
              | From TFA, a somewhat infamous quote about Frank
              | Rosenblatt's perceptron:
             | 
             | > "[It] revealed an embryo of an electronic computer that
             | it expects will be able to walk, talk, see, write,
             | reproduce itself and be conscious of its existence. Later
             | perceptrons will be able to recognize people and call out
             | their names and instantly translate speech in one language
             | to speech and writing in another language, it was
             | predicted"
             | 
              | The section on "Past Winters" in TFA is a good part of
              | the reason why you might not want to believe what you
              | think you see coming right at us...
        
             | [deleted]
        
           | robwwilliams wrote:
            | What some are arguing is that the future is now. A 1-in-20
            | chance of another AI winter is not worth serious
            | consideration given the enormous progress of the last 5
            | years. The challenges do not involve hardware at this point.
        
             | woeirua wrote:
              | You cannot say that with any certainty. While GPT-4 shows
              | great promise, it's still not clear at all that the
              | current architecture can keep scaling up. Presumably they're
             | already using all the data they have access to. So what's
             | left to improve performance? Changes in architecture, some
             | of which could require significantly different hardware.
        
             | JohnFen wrote:
             | > What some are arguing is that the future is now
             | 
             | Yes, I know. But there's no way to know if these people are
             | right, right now. That will only be possible to know in the
             | future, looking back, so it's still "predicting the
             | future".
        
         | aimor wrote:
         | I was recently in a work meeting with some higher ups where we
         | discussed the near-term future of AI. The idea that this is the
          | interface revolution on par with GUIs and the WWW was
         | mentioned, and attributed to Bill Gates (I guess he must have
         | talked about this recently). I've been dwelling on everyone's
         | expectations for a few days and I see the hype cycle.
         | 
         | A good reason to consider "what if" is that it's still going to
         | take a lot of effort to get from where we are to where
          | expectations are. There have been many false starts recently:
         | self-driving cars, 3D printing, VR/AR, crypto. 10 years on and
         | they're all real technologies anyone can use and making
         | progress every day, but the wild expectations hit reality. When
         | we get there with AI (and we will, because expectations are a
         | moving goalpost) it's good to have considered what we want to
         | do about it. We don't want to waste years of resources on
          | insurmountable roadblocks; we want ideas about why what
          | worked before stopped working and what easier paths forward
          | might exist.
         | 
         | That is, I don't see many people downplaying AI but I do see
         | people with lots of unanswerable questions about how long this
         | current boom will last and where we'll wind up when it's over.
        
           | pixl97 wrote:
            | Bill Gates' AI episode went up on the Microsoft YouTube
            | channel recently:
           | 
           | https://youtu.be/bHb_eG46v2c
        
         | mcphage wrote:
         | > I find it striking that there are still so many people
         | downplaying the latest developments of AI.
         | 
         | 5-7 years ago the progress with self driving cars seemed
         | enormous, and the end of driving was just around the corner.
         | And then all progress seemed to stall or recede, and it turned
         | out that what seemed like huge progress was mostly hot air.
         | 
          | Maybe that's not what's happening now, but I don't think a
         | cautious approach is unwarranted.
        
           | CuriouslyC wrote:
            | Progress is sigmoidal, approaching a limit. If the minimum
            | acceptable performance lies in the flat upper portion of
            | the curve, you'll see progress seem to stall out in that
            | manner. Keep in mind that SoTA self-driving AI has a better
            | track record than people outside of rare corner cases, but
            | the liability issue is crippling, so it won't see
            | widespread adoption beyond driver-assist autopilot until
            | it's "perfect". Unlike full self-driving, most applications
            | of AI don't have a functional floor that sits well into
            | diminishing returns.
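            | 
            | As a rough illustration (a toy logistic curve, not fitted
            | to any real data), steady underlying effort can look like a
            | stall once you're past the knee:
            | 
            |     import math
            | 
            |     def capability(effort):
            |         # Toy logistic curve: capability saturates at 1.0
            |         # as effort grows.
            |         return 1 / (1 + math.exp(-effort))
            | 
            |     for effort in range(10):
            |         gain = capability(effort + 1) - capability(effort)
            |         print(f"effort {effort}: capability "
            |               f"{capability(effort):.4f}, gain {gain:.4f}")
            |     # Early increments of effort produce visible jumps;
            |     # near the ceiling the same effort buys almost nothing,
            |     # so progress "seems to stall".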
        
         | waboremo wrote:
         | Been noticing the same. There's this very unique flavor of
         | comment, happening even on HN, where people downplay the _level
         | of impact_ of AI /MI/ML (whatever you're more comfortable with)
         | not whether or not it will have any impact at all.
         | 
          | Mind you, there are always a variety of possibilities that
          | people see, that is true, but this flavor of comment stands
          | out because I haven't seen it expressed before. Not during
          | crypto (which others keep bringing up).
         | 
         | It feels like there's a much more serious... insecurity? Maybe
         | that's too aggressive of a word, but it's one that describes
          | how this downplaying is partially rooted in the idea of
          | replacement: a lack of job security, and uncertainty about
          | what you should be spending your time on. All of this is so
         | unique, nobody was feeling this way during crypto, not a single
         | person. If you hated crypto, you were hating it because it
         | sounds redundant, the "fans" were annoying, the environmental
         | impact, etc - not a single one of these reasons came from this
         | assumption that crypto will be the new norm.
         | 
         | So I think that's what a lot of the other comments are missing
         | right now. The downplaying feels totally different now. As if
         | even the people who are downplaying realize what's happening,
         | but don't want it to be true.
         | 
         | I did like this blog post though, outside of what we're talking
         | about.
        
           | atleastoptimal wrote:
           | They are coping. Crypto hate is based on facts. AI hate is
           | based on what you said, insecurity.
        
           | pixl97 wrote:
           | >and lack of job security,
           | 
            | A fair number of HN'ers are the "pull yourselves up by the
            | bootstraps", "we don't need no stinking unions", "social
            | safety nets are for the weak" types. The potential for AGI
            | causes cognitive dissonance in them. They are intelligent
            | enough to realize that AGI would nearly completely destroy
            | their ability to make money, but at the same time they
            | don't want to let go of the ideal that "rugged
            | individualism" is why they are better than everyone else,
            | even though they would need a fairer system to survive.
        
         | majormajor wrote:
         | > next revolution on par or even greater than the emergence of
         | the www
         | 
         | Is this something you can see so clearly that you can describe
         | it in specific terms?
        
           | nuancebydefault wrote:
            | The time between rather large steps in advancement is
            | small. So small that we think: oh, yet another AI
            | advancement, I'm getting used to this.
            | 
            | At first we were perplexed by a computer beating Kasparov
            | at chess.
            | 
            | These days, however, we're not perplexed anymore that a lot
            | of creative jobs have been replaced by AI. (I think those
            | people will find another passion, no doubt, but that is
            | another topic.)
            | 
            | Next come the less creative jobs. But I'm already thinking
            | about climate change and hunger mitigation. You could call
            | me an idealist; only time can tell, of course.
        
         | mmphosis wrote:
          | When I first ran NCSA Mosaic, I dismissed it. It took another
          | 5 years for it to sink in (and this was before Google). Other
          | than the hype, what are _the latest developments of AI_?
         | 
         | * generate digital images from natural language descriptions.
         | ie. DALL-E
         | 
         | * suggest code and entire functions. ie. Copilot
         | 
         | * generate textual content: Generative Pre-trained Transformer.
         | ie. ChatGPT
         | 
         | The future may be in other important endeavors:
         | 
         | * train on ... data and learn how to ... to automate ...
         | 
         | I think that the creativity is in your idea and not in what is
         | being automatically generated.
        
         | danielmarkbruce wrote:
          | For context: I was skeptical until GPT-4, and I've completely
          | flipped.
          | 
          | Let people play with GPT-4 for a bit.
        
           | DennisAleynikov wrote:
            | GPT-4 is clearly magic. No matter what mechanism backs it,
            | the fact that we managed to get rocks to think and reason
            | about things in a convincing manner was absolutely
            | unimaginable even a few years ago.
            | 
            | But the future is very unclear.
        
             | marvin wrote:
             | The momentum of investment _even in known techniques_ makes
             | it pretty clear that there will be even more useful things
             | than GPT-4 available very shortly. Never mind in 10 years,
             | with global training  & inference capacity increased by
             | multiples.
             | 
             | It's obvious at this point that we are figuring out how to
             | create machines that think. While I agree that the future
             | is unclear, in the sense that it's hard to make exact
             | predictions, the trend is very clear. Significant
             | efficiency improvements in many types of intellectual labor
              | are almost certain.
             | 
             | I don't think it's _priced in_ yet; the somewhat
             | independent-minded among tech people are currently the only
              | ones who have front-row seats to what's going on. But in a
             | few years it will be obvious.
             | 
             | Honestly, I think the persistent exponential improvement in
             | AI makes it obvious we're on track for superintelligent
             | machines in a few decades as well (if that!), but the
              | preceding paragraphs should at least be entering the
              | Overton window.
        
         | [deleted]
        
         | [deleted]
        
         | travisjungroth wrote:
         | > Is this case really worth exploring?
         | 
         | Yes! You've mentioned people downplaying, but this is someone
         | who is 95% confident there _won't_ be an AI winter in the next
         | seven years. That's really quite confident. And the definition
         | he gave makes it compatible with AI being the biggest thing
         | since sliced bread. But if it gets funded at 3x bread and
         | scales down to 1x bread, that will count.
         | 
            | What did this exploration cost? A blog post? An HN discussion?
         | I think it's so valuable for people to consider the scenarios
         | _they_ think are unlikely. I find this analysis much more
         | interesting than all the dismissive "my job is safe, AI can't
         | [friction-point-of-existing-system]" posts. It's certainly
         | worth the investment of a few hours of work.
        
           | nuancebydefault wrote:
           | You are right, even if it only leads to this HN discussion,
            | which provides us with a more nuanced view -- "Hadn't
            | looked at it that way." Thanks!
        
             | travisjungroth wrote:
             | Thanks. And I do share your surprise at the number of
             | people downplaying it. It seems like some people are
             | forecasting that things that happened last month won't
             | happen. That's not a great way to get predictions right.
        
       | Hizonner wrote:
       | Well, I'm HOPING for that, but not RELYING on it...
        
         | DennisAleynikov wrote:
         | this person has the right idea :)
         | 
          | pray for a winter but prepare for societal upheaval
        
       | WheelsAtLarge wrote:
        | I give it a 95% chance that an AI winter is coming. Winter in
        | the sense that there won't be any new ways to move forward
        | towards AGI. The current crop of AIs will be very useful, but
        | it won't lead to the scary AGI people predict.
       | 
       | Reasons:
       | 
       | 1) We are currently mining just about all the internet data
       | that's available. We are heading towards a limit and the AIs
       | aren't getting much better.
       | 
        | 2) There's a limit to the processing power that can be used to
        | assemble the LLMs, and the more that's used, the more it will
        | cost.
       | 
       | 3) People will guard their data more and will be less willing to
       | share it.
       | 
       | 4) The basic theory that got us to the current AI crop was
       | defined decades ago and no new workable theories have been put
       | forth that will move us closer to an AGI.
       | 
       | It won't be a huge deal since we probably have decades of work to
       | sort out what we have now. We need to figure out its impact on
       | society. Things like how to best use it and how to limit its
       | harm.
       | 
        | Like they say, "interesting times are ahead."
        
         | screye wrote:
          | > We are currently mining just about all the internet data
          | that's available. We are heading towards a limit and the AIs
          | aren't getting much better.
          | 
          | The entire realm of video is under-explored. Think about the
          | amount of content that lives in video. Image + text is
          | already being solved, so video isn't the biggest leap.
          | Embodied learning is underexplored. Constant surveillance is
          | underexplored.
          | 
          | > There's a limit to the processing power that can be used to
          | assemble the LLMs and the more that's used the more it will
          | cost
          | 
          | If the scaling-law papers have shown us anything, it is that
          | the models don't need to get much bigger. More data is enough
          | for now.
          | 
          | > People will guard their data more and will be less willing
          | to share it.
          | 
          | Fair. Though companies might be able to prisoner's-dilemma
          | FOMO their way into everyone's data.
          | 
          | > was defined decades ago
          | 
          | The core ideas around self-attention came about around
          | 2015-2017. The ideas are as new as new ideas get. It's like
          | saying that the ideas for the invention of calculus existed
          | for decades before Newton because we could already compute
          | the area & volume of things. Yes, progress is incremental.
          | There are new ideas out there, and we'll inevitably find
          | something new in 20 years that some sad PhD student is
          | working on today, all while regretting not working on LLMs.
          | 
          | > interesting times are ahead
          | 
          | Yep
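          | 
          | For a sense of scale on the "more data" point, a
          | back-of-the-envelope sketch using the rough Chinchilla rule
          | of thumb of ~20 training tokens per parameter (the constant
          | is an approximation, not a law):
          | 
          |     # Compute-optimal data budgets, Chinchilla-style
          |     # (~20 tokens per parameter; the constant is approximate).
          |     TOKENS_PER_PARAM = 20
          | 
          |     for params in (70e9, 175e9, 500e9):
          |         tokens = params * TOKENS_PER_PARAM
          |         print(f"{params / 1e9:.0f}B params -> "
          |               f"~{tokens / 1e12:.1f}T tokens")
          |     # A 70B model already "wants" ~1.4T tokens, which is why
          |     # under-mined sources like video start to matter.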
        
         | atleastoptimal wrote:
          | 1. I'd wager AI models will begin to learn via interacting
          | with the world rather than just reading lots of text. That
          | will reduce the need for a huge corpus of training data.
          | 
          | 2. More efficient methods of training and running LLMs are
          | emerging at an exponential rate.
          | 
          | 3. There's already enough data to train an AGI.
          | 
          | 4. The transformer architecture isn't that old.
        
         | thelazydogsback wrote:
          | Sure - if you use generative LLMs in one-shot conversational
          | scenarios, nothing magic is going to happen. However, if you
          | blend the LLMs with many of the techniques we've been using
          | in GOFAI over the years - such as backward and forward goal
          | achievement, weak/achievement goals/tendencies/drives, non-
          | hierarchical partial-order planning, treating complex speech
          | acts and actions among multiple agents (both internal and
          | external/situated) as planning problems, using plan
          | recognition to guess the intentions of other actors, and
          | using short- and long-term feedback loops based on external
          | and internal expectations - then you'll see something akin to
          | AGI. And very quickly.
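          | 
          | A minimal sketch of that blended pattern (the llm() stub and
          | goal-stack "planner" here are placeholders, not any real
          | system's API):
          | 
          |     def llm(prompt: str) -> str:
          |         # Stub standing in for a real model call.
          |         return "DONE"
          | 
          |     def achieve(goal: str, depth: int = 0) -> bool:
          |         if depth > 3:
          |             return False  # feedback loop: give up, replan
          |         plan = llm(f"Decompose '{goal}' into steps, "
          |                    "or say DONE.")
          |         if plan == "DONE":
          |             return True
          |         # Backward-chain: each proposed step is a subgoal.
          |         return all(achieve(s, depth + 1)
          |                    for s in plan.splitlines())
          | 
          |     print(achieve("book a flight"))  # True with the stub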
        
         | samvher wrote:
         | 1) Is everything really being mined already? My sense is that
         | another round of GPT-like training on YouTube, EdX and Coursera
         | data + some other large video archives (BBC and the like) could
         | still make quite a bit of difference. Text and images
         | independently is one thing, having them together in context
         | might be something else.
         | 
          | 2) The available power seems to be growing pretty rapidly and
          | dropping in price. I think there are still quite some gains to
         | be had from architectural optimizations (both in hardware and
         | in models).
         | 
          | 4) They were defined decades ago, but they only actually
          | seemed to move us closer to AGI recently.
         | 
         | You might be right, and there are definitely interesting times
         | ahead! But I kind of doubt that we will have decades to sort
         | out what we have and figure out its impact on society (which is
         | a bit scary).
        
         | galaxytachyon wrote:
         | Some of these are good arguments and can turn out to be true.
         | But they are equally likely to be false.
         | 
         | 1/ We humans are still generating data. To live is to generate
          | data, and while it is dystopian to think about how this data
          | can be harvested, it is still possible to get more
          | information to feed the next-gen AIs. And remember that right
          | now, we have only used text and images. Videos, audio,
          | sensory inputs (touch, smell, taste, etc.), and even humans'
          | brainwaves are still available as more training data. We are
          | nowhere close to running out of stuff to teach AIs yet.
         | 
          | 2/ Fine-tuning and optimized training have shown tremendous
          | effects in reducing the size of these LLMs. We already have
          | LLMs running on laptops and mobile phones, with reasonable
          | performance, only half a year after the big release. There is
          | lots of room to grow here (see the quantization sketch after
          | this list).
         | 
         | 3/ My nieces laughed when I told them TikTok is too invasive.
          | Most people outside of HN do not care about data privacy as
         | much as you might think.
         | 
          | 4/ Sometimes it only takes one big breakthrough to open the
          | floodgates. The transistor was that point for computers, and
          | we are still developing new tech based on that ~75-year-old
          | invention. We don't know how much potential there is in
          | these decades-old AI inventions, especially when many of
          | them were only put into proper practice in the last decade.
          | We didn't learn from the big mistake until very recently,
          | after all.
         | 
         | Just some ways things can go differently. It is the future, we
         | can't really predict it. Maybe an AI can...
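          | 
          | On point 2, a toy sketch of 8-bit absmax quantization, one of
          | the tricks behind LLMs that fit on laptops (real schemes are
          | more sophisticated):
          | 
          |     import numpy as np
          | 
          |     # Quantize float32 weights to int8 with a single scale.
          |     weights = np.random.randn(1000).astype(np.float32)
          |     scale = np.abs(weights).max() / 127.0
          |     q = np.round(weights / scale).astype(np.int8)  # 1 byte
          |     recovered = q.astype(np.float32) * scale       # dequant
          | 
          |     print(f"memory: {weights.nbytes} -> {q.nbytes} bytes")
          |     print(f"max error: "
          |           f"{np.abs(weights - recovered).max():.4f}")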
        
         | bstockton wrote:
         | >4) The basic theory that got us to the current AI crop was
         | defined decades ago and no new workable theories have been put
         | forth that will move us closer to an AGI.
         | 
          | I guess it really depends on what you mean by "basic theory",
          | but my view is that the framework that got us to our current
          | crop of models (vision now too, not just LLMs) is much more
          | recent, namely transformers, circa 2017. If you're talking
          | about artificial neural networks in general, maybe. ANNs are
          | really just another framework for a probabilistic model that
          | is numerically optimized (albeit inspired by biological
          | processes), so I don't know where to draw the line for what
          | defines the basic theory... I hope you don't mean backprop
          | either, as the chain rule is pretty old too.
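          | 
          | The "backprop is just the chain rule" point fits in a few
          | lines: a single sigmoid neuron with squared-error loss, the
          | gradient computed by hand:
          | 
          |     import math
          | 
          |     w, x, y_true = 0.5, 2.0, 1.0
          |     z = w * x
          |     y = 1 / (1 + math.exp(-z))   # forward pass
          |     loss = (y - y_true) ** 2
          | 
          |     dloss_dy = 2 * (y - y_true)  # chain rule, link by link
          |     dy_dz = y * (1 - y)          # sigmoid derivative
          |     dz_dw = x
          |     dloss_dw = dloss_dy * dy_dz * dz_dw
          |     print(f"loss={loss:.4f}, dL/dw={dloss_dw:.4f}")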
        
       | javaunsafe2019 wrote:
        | I don't even understand why we call models that predict text
        | output to a question "AI".
        | 
        | For sure we will get a lot of stuff automated with it in the
        | near future, but this is far away from anything really
        | intelligent.
        | 
        | It just doesn't understand or feel things. It's dead, because
        | it just outputs data based on its model.
        | 
        | Intelligence contains a will and chaos.
        
       | thelazydogsback wrote:
       | > I put 5% on an AI winter happening by 2030
       | 
       | lol. 5%? - that's really laying it on the line
        
       | pixl97 wrote:
       | > Eden writes, "[Which] areas of the economy can deal with 99%
       | correct solutions? My answer is: ones that don't create/capture
       | most of the value."
       | 
       | And
       | 
       | >Take for example the sorting of randomly generated single-digit
       | integer lists.
       | 
       | These seem like very confused statements to me.
       | 
        | For example, let's take banking. It's actually two (well, far
        | more) different parts. You have calculating things like
        | interest rates and issues like the 'sorting integers' example
        | above. This is very well solved by simple software at extremely
        | low energy cost. If you're having your AI model spend $20
        | trying to figure out whether 45827 is prime, you're doing it
        | wrong. The other half of banking is figuring out where to
        | invest your money for returns. If you're having your AI read
        | all the information you can feed it for consumer sentiment and
        | passing that to other models, you're probably much closer to
        | doing it right.
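        | 
        | The "solved by simple software at extremely low energy cost"
        | point is easy to make concrete (naive trial division, but it
        | settles a number this size in microseconds):
        | 
        |     import math
        | 
        |     def is_prime(n: int) -> bool:
        |         # Trial division: microseconds of CPU for a 5-digit
        |         # number, versus dollars of LLM inference.
        |         if n < 2:
        |             return False
        |         return all(n % d
        |                    for d in range(2, math.isqrt(n) + 1))
        | 
        |     print(is_prime(45827))          # settled instantly
        |     print(sorted([3, 1, 4, 1, 5]))  # same for small int lists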
       | 
        | And guess what: ask SVB about 99%-correct solutions that
        | do/don't capture value. Solutions that have correct answers
        | are quickly commoditized and have little value in themselves.
       | 
        | Really, the most important statement is the last one: mostly
        | the article is telling us the reasons why AI could fail, not
        | arguing that those reasons are very likely.
       | 
       | >I still think an AI winter looks really unlikely. At this point
       | I would put only 5% on an AI winter happening by 2030, where AI
       | winter is operationalised as a drawdown in annual global AI
       | investment of >=50%. This is unfortunate if you think, as I do,
       | that we as a species are completely unprepared for TAI.
        
         | mirekrusin wrote:
         | Can you ever be prepared for TAI? What does it even mean?
        
       ___________________________________________________________________
       (page generated 2023-03-27 23:01 UTC)