[HN Gopher] Please ignore the deluge of complete nonsense about Q*
       ___________________________________________________________________
        
       Please ignore the deluge of complete nonsense about Q*
        
       Author : mfiguiere
       Score  : 99 points
       Date   : 2023-11-24 20:14 UTC (2 hours ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | 6gvONxR4sf7o wrote:
       | "Please ignore all the speculation... and now for my
       | speculation."
        
         | michael_nielsen wrote:
         | LeCun at least knows a lot about AI. Most of the Q* stuff is
         | coming from people who know almost nothing.
        
           | 6gvONxR4sf7o wrote:
            | But his guesses match the consensus, so they don't add
            | anything beyond the speculation he's criticizing.
        
             | michael_nielsen wrote:
             | Fair enough. I suppose the useful bit is that lots of
             | ignorant people have gone nuts ("OMG this is AGI!") without
             | any details. That's just hype. But, yes, to your point,
             | there is also some substantive and interesting speculation
             | from more knowledgeable people.
        
             | Cacti wrote:
              | The same could be said of you, who are also contributing
              | nothing.
        
             | samrus wrote:
              | It sounds like you prefer something that's more sensational
              | simply because it's more sensational.
        
           | wkat4242 wrote:
           | Seems to be a common theme with stuff starting with Q...
        
           | imjonse wrote:
           | I've run ollama on my macbook, watched a couple videos on
           | prompt engineering, tried out stable diffusion on my phone. I
           | am even working on a startup that is basically a shiny
           | website plus an OpenAI API wrapper on the backend. What do
           | you mean I am not qualified to speculate on what Q* from
           | OpenAI is and how it is transformational for society!!?? /s
        
       | qualifiedai wrote:
        | Yann has been a refreshing source of reason and common sense with
        | regard to AI safety, regulation and open-source. I wish we had
        | more people like him and fewer AI doomer cultists.
        
         | hackinthebochs wrote:
         | People like his takes because he gives an authoritative gloss
         | to what they already believe. But his points are usually
          | lacking in argument or rigor. Anyone who essentially expects
          | the public to trust them when it comes to the outcome of AI/AGI
          | should be viewed with suspicion.
        
           | peyton wrote:
           | I mean I don't think predicting the future is something that
           | typically involves rigor. The outcome is pretty clear:
           | whatever makes a ton of money. Probably a trusted friend in
           | your pocket that sometimes helps you buy stuff. The most
           | negative predictions are silly because they don't involve
           | making a ton of money for anybody.
        
             | makeitdouble wrote:
             | The point about money is important, but we should also keep
             | in mind most outcomes will make a ton of money for someone
             | somewhere.
             | 
              | Hell, there are wars killing tens of thousands of people
              | going on right now, and a ton of money is changing hands,
              | making a juicy business for whole industries.
        
           | qualifiedai wrote:
            | On the contrary, he gives good arguments about why open is
            | safer and closed is more dangerous, whereas the other side
            | gives, imho, convoluted arguments and asks for them to be
            | proven wrong (as opposed to trying to prove themselves
            | right).
        
             | hackinthebochs wrote:
             | His arguments in defense of barreling forward with AI are
             | terrible. They have zero chance to convince someone who
             | doesn't share his intuitions/interests. For example:
             | https://twitter.com/ylecun/status/1718764953534939162
             | 
              | The ease with which smart people convince themselves of
              | what they want to be true, with zero self-awareness, makes
              | me much more fearful of what's to come.
             | 
              | >the other side gives, imho, convoluted arguments and asks
              | for them to be proven wrong (as opposed to trying to prove
              | themselves right).
             | 
              | The question is: what should our default stance be until
              | proven otherwise? I submit it is not to continue building
              | the potentially world-ending technology.
        
               | qualifiedai wrote:
                | The default in science is that the side arguing a point
                | has the burden of proving it correct, not asking the
                | other side to prove them wrong.
        
             | avsteele wrote:
             | Please point me to an example of his good arguments.
             | 
             | I only see his posts on Twitter but haven't been impressed.
        
           | mycologos wrote:
           | At the same time, it seems like _some_ antidote is needed to
           | the breathless, quasi-mystical hype that cryptic OpenAI
           | claims seem designed to stoke. Demanding precise and
           | substantive criticisms of something about which almost no
           | technical details have been provided seems like an unfair
           | bar.
        
         | UniverseHacker wrote:
          | His arguments about safety are all just wishful thinking; he
          | never addresses the substance of "AI doomer" concerns or
          | arguments.
        
       | uoaei wrote:
       | It's hard to call anything that comes from LeCun "news": any time
       | you hear of a phenomenon in the ML/AI space, you know pretty much
       | exactly the sentiment he's going to express. His entire brand is
       | "doomers are wrong, you can trust me, I am AI daddy".
        
         | renewiltord wrote:
            | Doomers thought GPT-2 was too dangerous to release. I guess
            | one can be as successful as Dalio by calling doom at every
            | instant.
        
           | uoaei wrote:
            | And one can be unfalsifiably successful by using mass media
            | to proclaim safety at every instant. The irony is that if
            | things do fail, these kinds of pronouncements can no longer
            | be made using tools like Twitter, etc., on which influencers
            | like LeCun build their brands, since their existence and
            | utility depend on the stability of society.
        
           | PopePompus wrote:
              | Yes, the anti-doomers will be right many times, and wrong
              | at most once.
        
             | wkat4242 wrote:
             | The point is moot anyway.
             | 
                | If someone can build it, someone will, laws and impending
                | doom or not. It's probably going to be better if it's us
                | than Russia or China.
        
               | benatkin wrote:
               | That's a reductive argument, and won't work with me. What
               | we have is tantamount to an arms race, and trying to
               | suppress another country's development of tech that could
               | be used against them is a thing. We have already
               | restricted access to our microchips to China specifically
               | and Russia through broad sanctions.
               | https://www.nytimes.com/2023/10/17/business/economy/ai-
               | chips... Russia isn't poised for the current crop of AI
               | tech anyway.
        
           | TeMPOraL wrote:
            | Well, GPT-2 led directly to GPT-3. GPT-3 led to GPT-3.5 and
            | then to GPT-4. GPT-4 might lead to all of us losing our
            | source of income, so they may yet be proven right (economic
            | turmoil _can_ be an x-risk if it's large enough).
        
       | s-xyz wrote:
        | I should maybe dig a bit deeper into what he is saying, but
        | every time I get excited about some development I get discouraged
        | by his views. Perhaps they are realistic, but I prefer to dream
        | sometimes.
        
       | andy_xor_andrew wrote:
       | I took an AI course in college back around 2015. Just a bit
       | before AlphaGo.
       | 
        | One project was to implement a simple Q-learning action/value
        | system to play simple games, like Pacman (a rough sketch of that
        | idea appears after this comment).
       | 
       | The crypto-bros-turned-AGI-experts on twitter are spouting the
       | most uninformed, misguided garbage about this whole thing, it's
       | quite amazing to watch.
       | 
        | And I'm not saying that I am smart or an expert about Q* because
        | I took an introductory college course. I'm saying that even I,
        | _someone who knows basically nothing beyond the introductory
        | concepts_, can identify that these people have _no_ clue what
        | they are talking about, and yet they have this incredible talent
        | for speaking in such an authoritative and faux-intelligent tone.
        | It's amazing.
       | 
       | My favorites are the tweets that sound like this:
       | 
       | "So, now we know that [insert something totally wrong]. Well,
       | what if extend that further, by [another totally wrong
       | conclusion]. Here's an explanation of how this all works. A
       | thread, 1/N"
       | 
       | followed by a full thread, images included, of drivel.
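        
        For readers unfamiliar with the term, here is a minimal sketch of
        the tabular Q-learning update described in the comment above. The
        helper functions and hyperparameters below are illustrative
        placeholders for a toy environment, not anything known about
        OpenAI's Q*.
        
            import random
            from collections import defaultdict
            
            # Q-table: maps (state, action) pairs to estimated values,
            # defaulting to 0.0 for unseen pairs.
            Q = defaultdict(float)
            
            ALPHA = 0.1    # learning rate
            GAMMA = 0.99   # discount factor
            EPSILON = 0.1  # exploration rate
            
            def choose_action(state, actions):
                # Epsilon-greedy: explore occasionally, otherwise pick
                # the action with the highest estimated value.
                if random.random() < EPSILON:
                    return random.choice(actions)
                return max(actions, key=lambda a: Q[(state, a)])
            
            def update(state, action, reward, next_state, actions):
                # Classic Q-learning update (Watkins, 1989):
                #   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
                best_next = max(Q[(next_state, a)] for a in actions)
                Q[(state, action)] += ALPHA * (
                    reward + GAMMA * best_next - Q[(state, action)])
        
        In a Pacman-style assignment like the one described, state would
        be a board configuration and actions the legal moves; the agent
        plays many episodes, calling choose_action and update at each
        step, until the Q-values converge.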
        
         | nothrowaways wrote:
          | Exactly, it's VC hype.
        
       | kypro wrote:
        | I'm confused. I thought people were worried about the danger of
        | some AI breakthrough? If researchers at OpenAI have developed an
        | LLM more advanced than GPT-4 that can also plan, is that not
        | potentially a worrying breakthrough?
        
         | laserbeam wrote:
          | It's just rumours. Everything I've read about that
          | breakthrough sounds about as grounded in reality as a generic
          | conspiracy theory.
         | 
         | There may be a breakthrough, there may not, but nothing on the
         | topic is convincing or worth reading.
        
           | jordanpg wrote:
            | Rumors that were reported on by a reputable news agency:
            | https://www.reuters.com/technology/sam-altmans-ouster-openai...
        
             | mianos wrote:
              | They clarify in their own article that it is
              | unsubstantiated by second sources:
              | 
              | > Reuters could not independently verify the capabilities
              | of Q* claimed by the researchers.
              | 
              | True, they are a reputable news agency, but the parent is
              | also correct: it's not highly credible.
        
       | martythemaniak wrote:
       | Q* is the new "GPT-4 has a hundred trillion parameters".
       | 
       | https://thealgorithmicbridge.substack.com/p/gpt-4-a-viral-ca...
        
       | riazrizvi wrote:
       | What are the more popular themes of the complete nonsense about
       | Q*? Anyone know? I deleted my twitter account.
        
       | wkat4242 wrote:
        | Wow, I work in AI (implementation) and I have zero idea what
        | he's talking about lol
        
       | layer8 wrote:
       | I love that reply:
       | 
       | LeCun: "[Note: I've been advocating for deep learning
       | architecture capable of planning since 2016]."
       | 
       | Reply: "My understanding is Schmidhuber already solved that 10
       | years ago. Just no-one knows it yet."
        
       | ren_engineer wrote:
        | The leak about Q* feels like an olive branch to let the former
        | OpenAI board and Ilya save a bit of face, probably part of the
        | terms for Sam coming back. Plus, it distracts from all the drama
        | and puts a positive spin on things.
        | 
        | >"The board didn't handle things well, but they were right to be
        | concerned because OpenAI did have some sort of research
        | breakthrough"
        | 
        | It's not a coincidence that this leaked after Sam came back,
        | rather than before, when it could have made the board look more
        | justified in their decision. This changes the story from
        | incompetence to "it's a problem only OpenAI has because they are
        | so far ahead and close to AGI". A masterful PR move to leak this
        | and shift the narrative.
        
         | iepathos wrote:
          | Good point about the timing of the leak. IMO the whole fiasco
          | still reeks of incompetence, and no PR move can wash that
          | clean.
        
         | eigenvalue wrote:
         | To be fair, Altman separately alluded to a recent big
         | discovery/breakthrough in a talk shortly before all the drama
         | went down.
        
       | xg15 wrote:
       | So Q* is just A* for neural networks?
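        
        For context on the naming: in reinforcement learning, Q*
        conventionally denotes the optimal action-value function, while
        A* is the classic heuristic best-first search algorithm; whether
        OpenAI's Q* actually relates to either is, at this point, pure
        speculation. The standard textbook definition of Q*, via the
        Bellman optimality equation (in LaTeX):
        
            Q^*(s, a) = \mathbb{E}\big[\, r_{t+1}
                + \gamma \max_{a'} Q^*(s_{t+1}, a')
                \mid s_t = s,\ a_t = a \,\big]
        
        Tabular Q-learning, sketched earlier in the thread, is one way
        to estimate this function from experience.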
        
       | trhway wrote:
        | The conspiracy-minded me may see the whole saga of the last
        | week as a marketing campaign to generate excitement for the
        | GPT-5-now-with-30%-more-Q release.
        | 
        | An even more conspiracy-minded take (from the Russian news,
        | naturally) is that it is the Great Battle for the future of
        | humanity between "doomers" (Oh, no! The AI is going to kill us
        | all, we need to stop all the work and control the GPUs like
        | guns) and "Effective Altruists" (We can do all the evil today
        | in order to achieve greater good tomorrow).
        
       ___________________________________________________________________
       (page generated 2023-11-24 23:00 UTC)