[HN Gopher] Douglas Hofstadter changes his mind on Deep Learning...
       ___________________________________________________________________
        
       Douglas Hofstadter changes his mind on Deep Learning and AI risk
        
       Author : kfarr
       Score  : 225 points
       Date   : 2023-07-03 05:52 UTC (17 hours ago)
        
 (HTM) web link (www.lesswrong.com)
 (TXT) w3m dump (www.lesswrong.com)
        
       | norir wrote:
       | > If not, it also just renders humanity a very small phenomenon
       | compared to something else that is far more intelligent and will
       | become incomprehensible to us, as incomprehensible to us as we
       | are to cockroaches.
       | 
       | This seems like the reaction of an atheist who already overvalued
       | human intelligence.
        
         | ImHereToVote wrote:
         | So someone who doesn't believe in the things that you believe?
        
       | iamleppert wrote:
       | " ...In my book," pretty much sums it up. Literally everyone who
       | seems to pontificate on a Chat bot and new Photoshop features has
       | a book and can't seem to help but mention it. Replace "book" with
       | literally anything else and you can see it's completely about ego
       | and money at the end of the day. He's probably getting ready to
       | announce he accepted a position from one of these recently funded
       | AI companies and whatever he's getting from that is more than he
       | could make from his book, academia and giving interviews where he
       | talks about humanity's fear of fire.
        
         | frankfrank13 wrote:
         | idk about that, this isn't some rando twitter bandwagoner. This
         | is a legendary thinker and writer
        
         | FrustratedMonky wrote:
          | I initially thought so too. But his wording wasn't self-
          | aggrandizing; it was all about how he was wrong, scared,
          | etc. Overall not the take (or wording) I'd expect from
          | someone doing self-promotion.
        
         | Smaug123 wrote:
          | You should consider reading Gödel, Escher, Bach. It is an
         | astonishing book.
        
         | elil17 wrote:
         | Hofstadter clearly isn't trying to peddle his books for money
          | or clout. He writes in-depth, thoughtful books about the nature
         | of consciousness, not guides to making money off of AI.
         | 
         | The book he mentions in this interview, I Am a Strange Loop,
         | isn't some cash grab in response to LLMs - it was written in
         | 2007.
        
         | StrictDabbler wrote:
         | Hofstadter won a Pulitzer in 1980. His books are among the most
         | important ever written about the meaning and structure of AI.
         | He's had four decades where he could have milked that
         | reputation for money or influence and he's chosen hard academic
         | research at every branch point.
         | 
          | He sold _millions_ of copies of a densely written 777-page book
          | about semantic encoding (GEB).
         | 
         | It is insane and/or ignorant to imagine he's jonesing for clout
         | or trying to ride the ML wave.
        
           | iamleppert wrote:
            | Incredible! Basically proves my point: you came here to tell
           | me to buy his book. Maybe he's really worried that AI is
           | going to put people like him out of a job when it can produce
           | and market content at scale?
        
             | StrictDabbler wrote:
             | Yes. The 78-year-old Pulitzer-prize winner with forty years
             | of tenure, several fellowships, multiple books in every
             | library in the country, who spent his entire life trying to
             | develop AI software, is merely "worried" that AI is going
             | to "put people like him out of a job".
             | 
             | He's explicitly expressing a deep existential sadness at
             | how computationally simple certain artistic endeavors
              | turned out to be and how that's devalued them in his mind,
              | but at the end of the day it's really just about his
              | paycheck.
             | 
             | Also I'm totally here trying to sell you his books.
             | 
             | Nice job, Diogenes. Your cynicism cracked the case.
        
             | slowmovintarget wrote:
             | It cannot produce that kind of content. Especially with the
             | current lobotomizations.
        
       | vemv wrote:
        | As is probably quite common among the HN crowd, Douglas is my
        | hero - I have read three of his books.
       | 
       | First of all, hats off to him for his extraordinary display of
       | humility in this interview. People rarely change their minds
       | publicly, let alone hint that they no longer believe in their own
       | past work.
       | 
        | However, I'm genuinely surprised that he, of all people, sees
        | intelligence in GPT-4 output.
       | 
       | I think humans are just very eager to ascribe intelligence, or
       | personality, to a bunch of text. A text may say "I feel <blah>"
        | and that text can easily permeate our subconscious. We end up
        | believing that that "I" is, in fact, an "I"!
        | 
        | We have to actively guard against this ascription. It takes
        | constant self-micromanagement, which isn't a natural thing to do.
       | 
        | Ideally, we would have some objective measurements (benchmarks) of
       | intelligence. Our own impressions can be too easily fooled.
       | 
        | I know defining (let alone measuring) intelligence is no easy
        | task, but in the absence of a convincing benchmark, I won't give
        | credence to new claims around AI. Otherwise it's all hype and
        | speculation.
        
         | hyperthesis wrote:
          | He's just saying it's more like an "I" than his pharmacy-bot
          | example. His concern is the future: what seemed "far off" might
          | now be only 5 years away.
        
       | mercurialsolo wrote:
        | Do we really have such a poor understanding of LLMs and deep nets
        | that we need to be afraid of them? I would love to see this
        | disproved with some open source work on the internals of these
        | models, how they do inference, and exactly how they reason.
        | 
        | And why the current stream of models could never possibly realize
        | an AGI. Being able to display human-level intelligence and
        | creativity in confined spaces (be it chess- or Go-based models)
        | is something we have been progressing on for a while - now that
        | the same is applied to writing, image, or audio/speech generation,
        | we suddenly start developing a fear of AGI.
       | 
       | Is there a phrase for the fear of AI now building up?
        
         | ethanbond wrote:
         | No one has ever been convinced to fly an airplane into a
         | building by a chess board.
         | 
         | People have gotten convinced of that by words though.
         | 
         | The "attack surface" on human sensibility is just enormous if
         | you're able to use believably human language.
        
           | jrflowers wrote:
           | Nobody has been convinced to fly an airplane into a building
           | by a language model either.
        
             | ethanbond wrote:
             | "This might be a risk"
             | 
             | "No it's not, it hasn't materialized"
             | 
             | Risk: _Possibility_ of loss or injury
             | 
             | Risks, definitionally, are things that have not happened
             | yet.
        
               | hackinthebochs wrote:
               | Seeing how confidently and obtusely people dismiss the
               | risks of AI makes me so much more fearful of what's to
               | come. Human hubris knows no limits.
        
               | jrflowers wrote:
               | It's interesting how confidently and obtusely people will
               | proclaim categorical knowledge of the future.
               | 
               | It is a little disconcerting that there is a fight
               | between two somewhat cultish sects when it comes to
               | language models. Both sides call them "artificial
               | intelligence", one side says they'll save the world, the
               | other side says they'll end it.
               | 
                | There is very little room to even question "Is this
                | actually AI that we're looking at?" when the loudest
                | voices on the subject are VC tech bros and a Harry Potter
                | fan fiction author who has convinced people that he is
                | prescient.
        
               | hackinthebochs wrote:
                | Trying to impute the motives of one's interlocutor is dumb
               | and boring. How about we discuss the first-order issue
               | instead. Here's my argument for why x-risk is a real
               | possibility:
               | 
               | The issue is that small misalignments in objectives can
               | have outsized real-world effects. Optimizers are
               | constrained by rules and computational resources. General
               | intelligence allows an optimizer to find efficient
               | solutions to computational problems, thus maximizing the
               | utility of available computational resources. The rules
               | constrain its behavior such that on net it ideally
               | provides sufficient value to us above what it destroys.
               | But misalignment in objectives provides an avenue by
               | which the AGI can on net destroy value despite our best
                | efforts. Can you be sure you can provide loophole-free
                | objectives that ensure only value-producing behavior
                | from the human perspective? Can you prove that the ratio
                | of value created to value lost due to misalignment is
                | always above some suitable threshold? Can you prove that
                | the range of value destruction is bounded so that if it
                | does go off the rails, its damage is limited? Until we
                | can, x-risk should be the default assumption.
               | 
               | What say you?
        
               | jrflowers wrote:
               | > Trying to impute the motives of ones interlocutor is
               | dumb and boring.
               | 
               | I know right? You should see the response to my point
               | that nobody has been convinced to fly a plane into a
               | building by an LLM. "Dumb and boring" hits the nail on
               | the head.
               | 
               | > Seeing how confidently and obtusely people dismiss the
               | risks of AI
        
               | ethanbond wrote:
               | _If_ or _when_ AI is capable of doing all the great
               | things people proclaim it to be able to do, then it will
               | also be capable of doing immense harm, and we should be
               | putting more work into mitigating that.
               | 
               | Like it really is that simple. AI generally, LLMs
               | specifically, and certainly _this_ crop of LLMs in
               | particular might end up being inert pieces of technology.
               | But to the precise extent that they are _not_ inert, they
               | carry risk.
               | 
               | That's a perfectly sensible position. The optimist
               | position isn't even internally consistent. See Andreessen
               | on Sam Harris's podcast: AI will produce consumer utopia
               | and drive prices down. Also, there are no downside risks
               | because AI will be legally neutered from doing much of
               | anything.
               | 
               | Is it legally neutered or is it transformative? The
                | _skeptical_ case doesn't rely on answering this
               | question: to the extent it's effective, powerful, and
               | able to do good things in the world, it will also be
               | effective, powerful, and able to do bad things in the
               | world. The AI skeptics don't _need_ to know which outcome
               | the future holds.
        
               | jrflowers wrote:
               | > The AI skeptics don't need to know which outcome the
               | future holds.
               | 
               | But they _need_ to interpret a benign point about the
               | undisputed fact that an LLM has never convinced anybody
               | to fly a plane into a building as some sort of dangerous
               | ignorance of risk that needs correcting.
        
               | ethanbond wrote:
               | Person observing Trinity test: "Wow this is really
               | dangerous and could wipe out a city"
               | 
               | Very astute response: "An atomic weapon has never wiped
               | out a city"
        
               | jrflowers wrote:
               | Watching the first dot matrix printer output a page of
               | text: "This might print out text that summons Cthulhu"
        
               | ethanbond wrote:
               | Is all that money flowing into AI development and
               | deployment in order to produce pieces of paper with text
               | on them?
               | 
               | Nope, it's to produce society-scale transformation.
               | 
               | So even the optimists wouldn't buy your analogy.
        
               | jrflowers wrote:
               | >produce pieces of paper with text on them?
               | 
               | This is a good point -- No. LLMs are meant to produce
               | text on a screen, not paper, so I guess it's more like
               | 
               | Seeing the "it is now safe to shut off your PC" text on a
               | screen for the first time: "I am become Death, destroyer
               | of worlds"
        
               | ethanbond wrote:
               | Brilliant proof of exactly my point. When it comes to
               | discussing any possible outcome other than utopia,
               | suddenly the power of these tools drops to zero!
               | Remarkable :)
        
               | jrflowers wrote:
                | You responded to my previous comment that called utopians
                | just as cultish as the fanfic folks.
                | 
                | When it comes to discussing any possible outcome that
                | isn't the opinion of [cult x], the only reason the
                | other person disagrees is because they are in [cult y].
               | 
               | Remarkable :)
        
               | ethanbond wrote:
               | What? I never proposed that dichotomy and don't believe
               | in it. You did and you're tripping over it a bit. You can
               | just discard that model and engage with the substance of
               | the topic, you know!
        
           | jurgenaut23 wrote:
           | > The "attack surface" on human sensibility is just enormous
           | if you're able to use believably human language.
           | 
           | That's a fantastic quote ;-)
        
           | EA-3167 wrote:
           | Well... words and decades of circumstances. If you removed
           | the circumstances (the religion, the conflict, the money,
           | geography, etc) then the words would be absolutely hollow.
           | 
           | I think we tend to credit words where often circumstances are
           | doing the heavy lifting. For example try to start a riot with
           | words in Rodeo Drive. Now try to do it in Nanterre. Or better
            | yet, try to start a riot in Nanterre _before_ a 17 year old
            | was shot by police, vs. after.
           | 
           | You'll get a sense of just how valuable your words really
           | are.
        
             | Zigurd wrote:
              | Quite so, which is why retrospective analyses like "The CIA
              | helped start The Paris Review and that made literature
              | friendly to neoliberal ideology" are confections of
              | confirmation bias. Nothing is ever that pat. But tidy
              | little conspiracies are also never the goal. A nudge is
              | all that is realistic to aim for, and a few successes
              | are all you need to shift public perception.
             | 
             | Arming every ambitious cult leader wannabe from some
             | retrograde backwater with an information war WMD deserves
             | some caution.
        
             | mistermann wrote:
             | > Well... words and decades of circumstances.
             | 
              | That can also be modified with words, though (for both
              | good and bad). Unfortunately, those with expertise in this
              | domain may not have all of our best interests at heart.
             | 
             | > If you removed the circumstances (the religion, the
             | conflict, the money, geography, etc) then the words would
             | be absolutely hollow.
             | 
             | There's also the problem of non-religious faith based
             | belief.
        
             | chairhairair wrote:
             | Many people are in desperate situations. Most of them are
             | not convinced to do much about it.
             | 
             | Words can aim discontent.
             | 
             | Were the economic conditions in Weimar Germany that much
             | worse than many places today?
        
               | submeta wrote:
               | Reminds me of the idea of a "tipping point." When we hit
               | this point, words can really get people moving. This has
               | been true for big changes like revolutions and movements,
               | like Black Lives Matter, #MeToo, or Fridays for Future.
               | 
               | Words might not do much without the right situation, like
               | the parent mentioned with Rodeo Drive and Nanterre. But
               | they're still important. They can guide people's anger
               | and unhappiness.
               | 
               | In the case of Weimar Germany, the severe economic
               | instability and social discontent following World War I
               | created a fertile ground for radical ideologies to take
               | root. When these conditions coincided with persuasive
               | rhetoric, it catalyzed significant societal change. So,
               | while words can indeed be powerful, they're often most
               | effective when spoken into pre-existing circumstances of
               | tension or dissatisfaction. They can then direct this
               | latent energy towards a specific course of action or
               | change.
        
             | ethanbond wrote:
             | There were plenty of well-off people who flew to Syria to
             | go behead other people.
             | 
             | Anyway this doesn't matter that much. Sure, you can imagine
             | a world totally different from ours where there would be
             | zero differential risk between a chess-playing computer and
             | a language-speaking computer. But we live in _this_ world,
             | and the risk profile is not the same.
        
             | TheOtherHobbes wrote:
             | The circumstances are very active, and AI has real
             | potential to act as a force multiplier for them.
             | 
             | We've already seen this with troll farms and campaigns of
             | seeded insanity like the Q Cult.
             | 
             | Existing AI tools can make similar efforts cheaper and more
             | effective, and future AI tools are likely to be even more
             | powerful.
             | 
             | There's a huge difference between how an experienced
              | _technical_ researcher sees AI and how a politician, warlord,
              | or dark media baron sees it.
        
         | tempodox wrote:
         | I'm not afraid of LLMs, they are stupid machines. I'm afraid of
         | what humans will use them for, with serious, irreversible real-
         | life consequences.
        
       | fredgrott wrote:
       | Let me offer a counter to DH:
       | 
        | *Bias: my own biases are based on my understanding of how the
        | neurobiology works, formed after I got my ADHD under control
        | through nootropics.
        | 
        | 1. Neurons are not computer electronic circuits. Note, even DH
        | covers this in his own misgivings about how AI is viewed.
        | 
        | 2. Our Id is not an electronic computation thing, as our brain
        | is a biological, emotional, chemical wave machine of Id.
        | 
        | Think of it this way: the math of micro quantum and macro
        | quantum is vastly different. The same goes for AI, in that the
        | math of the micro circuits of AI will be vastly different from
        | the macro AI circuits that would give rise to any AI Id. We are
        | just not there yet - it's like saying the software that keeps
        | the international telecom system up and running has its own
        | emergent Id... it clearly does not, even though there are in
        | fact emergent things about that system of subsystems.
        
       | totallywrong wrote:
        | If the very same tech were called anything other than
        | "intelligence" we wouldn't have all the hype and discussions
        | about risks. We humans simply enjoy talking about the next
        | armageddon.
        
       | irchans wrote:
       | Hofstadter said, "Well, maybe as important as the wheel."
       | 
       | If AI significantly surpasses humanity in cognitive ability, then
       | I think it will have a much bigger impact than the wheel. (I
       | loved GEB and DH's other writings.)
       | 
        | LLMs have really improved a lot over the last two years and they
        | have shown many unexpected capabilities. I am guessing that they
        | will get some more good input (text mostly), a lot more compute,
        | and algorithmic improvements, and that may very well be enough to
        | become better than 99% of humans at tasks that involve only text.
        | Tasks that require video or image processing may be a little bit
        | more challenging. Having very smart AIs controlling robots may
        | be just five years away. (I recently lost a bet about autonomous
        | driving. Five years ago, I thought that autonomous cars would be
        | better than human drivers by now.)
       | 
       | I'm frightened by what AI will become over the next 10 years.
        
         | abrinz wrote:
         | It always takes longer than you expect, even when you take into
         | account Hofstadter's Law.
        
         | Smaug123 wrote:
         | (Hofstadter was quoting Geoff Hinton when he said that. What he
         | himself compared the advance to was "fire".)
        
           | irchans wrote:
            | You are correct. I misquoted. Thanks!
        
         | lolinder wrote:
         | > I recently lost a bet about autonomous driving. Five years
         | ago, I thought that autonomous cars would be better than human
         | drivers by now.
         | 
         | I think we're going to see something very similar with LLMs.
         | The autonomous car hype was driven by seeing that they were 80%
         | of the way there and concluding that at the rate they were
         | going they'd make up the remaining 20% quickly. That turned out
         | to be false: the last 20% has been _much_ harder than the first
         | 80%.
         | 
         | LLMs are in a very similar place, even GPT-4. They're good, and
         | they're going to be more and more useful (similar to adaptive
         | cruise control/lane assist). But I predict that they're going
         | to level out and stop improving as rapidly as they have in the
         | past year, and we're going to end up at a new normal that is
         | good but not good enough to cause the crises people are worried
         | about.
        
           | twic wrote:
           | This is my bet too. I think we're going to get some fantastic
           | new tools for doing specific tasks that work like magic, and
           | that we could never have imagined ten years ago. I don't
           | think we're going to get Yud hijacking a B-52.
        
         | zerodensity wrote:
         | But self driving cars have a much lower accident rate than
         | humans. How did you lose the bet?
        
           | lolinder wrote:
           | A lower accident rate in the extremely controlled conditions
           | where they're currently allowed to operate.
           | 
           | I'd be interested to see if anyone has done a comparison of
           | human drivers vs autonomous cars that controls for driving
           | conditions.
        
       | jameshart wrote:
       | Interesting to hear him say this:
       | 
       | > And I would never have thought that deep thinking could come
       | out of a network that only goes in one direction, out of firing
       | neurons in only one direction. And that doesn't make sense to me,
       | but that just shows that I'm naive.
       | 
       | I think people maybe miss that LLM output does involve a 'loop'
       | back - maybe even a 'strange' loop back, and I'm surprised to see
       | Hofstadter himself fail to pick up on it.
       | 
       | When you run an LLM on a context and sample from its output, you
       | take that sampled output it generated, update the context, and
       | iterate. So the LLM is not just feeding one way - it's taking its
       | output, adding it to its input, and then going round again.
       | 
       | So I don't think this implies what Hofstadter is saying about
       | intelligence maybe being less complex than he thought.
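        | 
        | That outer loop is roughly this sketch (assuming a hypothetical
        | `model` that maps a token sequence to next-token logits - not
        | any particular library's API):
        | 
        |   import numpy as np
        | 
        |   def sample_next(logits, temperature=1.0):
        |       # softmax over the vocabulary, then sample one token id
        |       z = logits / temperature
        |       p = np.exp(z - z.max())
        |       p /= p.sum()
        |       return int(np.random.choice(len(p), p=p))
        | 
        |   def generate(model, context, n_tokens):
        |       # the "loop back": each sampled token is appended to the
        |       # context and the whole sequence is fed forward again
        |       for _ in range(n_tokens):
        |           logits = model(context)   # one forward pass, no recurrence
        |           context = context + [sample_next(logits)]
        |       return context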
        
         | gamegoblin wrote:
         | Just being a part of any auto-regressive system does not
         | contradict his statement.
         | 
         | Go look at the GPT training code, here is the exact line:
         | https://github.com/karpathy/nanoGPT/blob/master/train.py#L12...
         | 
         | The model is _only trained to predict the next token_. The
         | training regime is purely next-token prediction. There is no
         | loopiness whatsoever here, strange or ordinary.
         | 
         | Just because you take that feedforward neural network and wrap
         | it in a loop to feed it its own output does not change the
         | architecture of the neural net itself. The neural network was
         | trained in one direction and runs in one direction. Hofstadter
         | is surprised that such an architecture yields something that
         | looks like intelligence.
         | 
          | He specifically used the correct term "feedforward" to
          | contrast with recurrent neural networks, which GPT is _not_:
          | https://en.wikipedia.org/wiki/Feedforward_neural_network
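          | 
          | For reference, the next-token objective boils down to
          | something like this sketch (PyTorch-style, paraphrasing
          | rather than quoting the nanoGPT code):
          | 
          |   import torch.nn.functional as F
          | 
          |   def next_token_loss(logits, targets):
          |       # logits: (batch, seq_len, vocab) from one forward pass
          |       # targets: the same text shifted one token to the left
          |       B, T, V = logits.shape
          |       return F.cross_entropy(logits.reshape(B * T, V),
          |                              targets.reshape(B * T))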
        
           | jameshart wrote:
           | It only yields something that looks like intelligence when
           | you update the context and iterate though.
        
             | gamegoblin wrote:
             | GPT can give a single Yes/No answer that indicates a fair
             | amount of intelligence for the right question. No iteration
             | there. Just a single pass through the network. Hofstadter
             | is surprised by this.
        
               | jameshart wrote:
               | Well, no, it can produce a probability distribution over
               | all possible tokens, among which 'yes' or 'oui' or 'hai'
               | or 'totally' or 'no' or 'nein' or the beginning of 'as a
               | large language model I am unable to answer that question'
               | are all represented. Which is either more or less
               | impressive than just being able to answer 'yes or no'
               | depending on your priors I guess.
               | 
               | There's maybe an interesting philosophical question of
               | perspective there because if you think of the GPT as
               | answering the question 'if you had just read this, what
               | token would you expect to read next?' That doesn't seem
               | like a question that necessarily requires 'intelligence'
               | so much as 'data'. It's just a classification problem and
               | we've been throwing NNs at that for years.
               | 
               | But if you ask the question 'if you had just written
               | this, what token would you expect to output next?' It
               | feels like the answer would require intelligence.
               | 
               | But maybe they're basically identical questions?
        
               | gamegoblin wrote:
               | The point of my comment is that even the distribution
               | represents intelligence. If you give it a tricky Yes/No
               | question that results in a distribution that's 99.97%
               | "Yes" and negligible values for every other token, that
               | is interesting. Hofstadter is surprised you can do any
               | amount of non-trivial reasoning in a single forward pass.
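                | 
                | Concretely, that single pass looks something like this
                | sketch (hypothetical token ids for "Yes"/"No", and the
                | same kind of hypothetical `model` returning next-token
                | logits):
                | 
                |   import numpy as np
                | 
                |   def yes_no(model, prompt_tokens, yes_id, no_id):
                |       # one forward pass, no iteration: compare the
                |       # probability mass on "Yes" vs "No" next tokens
                |       logits = model(prompt_tokens)
                |       p = np.exp(logits - logits.max())
                |       p /= p.sum()
                |       return "Yes" if p[yes_id] > p[no_id] else "No"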
        
               | [deleted]
        
         | jebarker wrote:
         | I was surprised he said this too. Even without the
         | autoregressive part of GPT models you have a deep transformer
         | with attention, so even a single forward pass can modify its
         | own intermediate outputs.
        
           | PartiallyTyped wrote:
           | The interesting thing though is that attention effectively
           | allows a model to meta-learn based on the current context, so
           | in many ways, it may be thought of as analogous to a brain
           | without long term memory.
        
           | yldedly wrote:
           | What do you mean? Attention is just matrix multiplication and
           | softmax, it's all feed forward.
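              | 
              | E.g., a single attention head is roughly this toy sketch
              | (one head, causal mask omitted, nothing fed back in time):
              | 
              |   import numpy as np
              | 
              |   def softmax(x):
              |       e = np.exp(x - x.max(axis=-1, keepdims=True))
              |       return e / e.sum(axis=-1, keepdims=True)
              | 
              |   def attention(X, Wq, Wk, Wv):
              |       # X: (seq_len, d_model); no state carried over time
              |       Q, K, V = X @ Wq, X @ Wk, X @ Wv
              |       scores = Q @ K.T / np.sqrt(K.shape[-1])  # all pairs
              |       return softmax(scores) @ V   # still one forward pass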
        
             | jebarker wrote:
             | I wasn't disputing that it's feed-forward. I just meant
             | that stacked transformer layers can be thought of as an
             | iterative refinement of the intermediate activations. Not
             | the same as an autoregressive process that receives
             | previous outputs as inputs, but far more expressive than a
             | single transformer layer.
        
       | gwright wrote:
       | Hofstadter makes the claim that "these LLMs and other systems
       | like them are all feed-forward". That doesn't sound right to me,
       | but I'm only a casual observer of LLM tech. Is his assertion
       | accurate? FWIW, ChatGPT doesn't think so. :-)
        
         | gautamcgoel wrote:
         | No, it is not correct. Transformers have two components: self-
         | attention layers and multi-layer perceptron layers. The first
         | has an autoregressive/RNN flavor, while the latter is
         | feedforward.
        
           | hackinthebochs wrote:
           | They are definitely feed-forward. Self-attention looks at all
           | pairs of tokens from the context window, but they do not look
           | backwards in time at its own output. The flow of data is
           | layer by layer, each layer gets one shot at influencing the
           | output. That's feed-forward.
        
         | gamegoblin wrote:
         | All the other responses to you at the time of writing this
         | comment are confidently wrong.
         | 
         | Definition of Feedforward (from wiki):
         | 
         | ``` A feedforward neural network (FNN) is an artificial neural
         | network wherein connections between the nodes do not form a
         | cycle.[1] As such, it is different from its descendant:
         | recurrent neural networks. ```
         | 
         | Hofstadter expected any intelligent neural network would need
         | to be recurrent, ie looping back on itself (in the vein of his
         | book "I am a strange loop").
         | 
         | GPT is not recurrent. It takes in some text input, does a fixed
         | amount of computation in 1 pass through the network, then
         | outputs the next word. He is surprised it doesn't need to loop
         | for an arbitrary amount of time to "think about" what to say.
         | 
         | Being put into an auto-regressive system (where the N-th word
         | it generates gets appended to the prompt that gets sent back
         | into the network to generate the N+1th word) doesn't make the
         | neural network itself not Feedforward.
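          | 
          | A toy sketch of the contrast (not either architecture's real
          | code):
          | 
          |   import numpy as np
          | 
          |   def feedforward_pass(x, layers):
          |       # GPT-style: a fixed stack of layers, data flows one way,
          |       # the same amount of computation for every output token
          |       for W in layers:
          |           x = np.maximum(0, x @ W)   # e.g. a ReLU layer
          |       return x
          | 
          |   def recurrent_step(x_t, h, W_in, W_rec):
          |       # RNN-style: the hidden state h is fed back in at every
          |       # step, so the network loops over its own earlier output
          |       return np.tanh(x_t @ W_in + h @ W_rec)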
        
           | ke88y wrote:
           | Right. I'm not at all sure what the siblings are talking
           | about. I suspect at least one is confusing linear with feed-
           | forward?
           | 
           | But I'm also surprised that Hofstadter keys in on this so
           | heavily. The fact that he wrote an entire pop-sci book on
           | recursion would, in my mind, make him (1) _less_ surprised
            | that AR and R aren't so dissimilar and (2) more sensitive to
           | the sorts of issues that make R more difficult to get working
           | in practice.
           | 
           | (In my mind, differentiating between auto-regressive and
           | recursive in this case is kind of the same as differentiating
           | between imperative loops and recursion -- there are extremely
           | important differences _in practice_ but being surprised that
           | a program was written using while loops where you imagined
           | left-folds would be absolutely required seems a bit... odd.)
        
             | gamegoblin wrote:
             | I think it has to do with the training regime and fixed-
             | computation time nature of feedforward neural networks.
             | 
             | Recurrent neural networks have the recursion as part of the
             | _training regime_. GPT only has auto-regressive
             | "recursion" as part of the inference runtime regime.
             | 
             | I think Hofstadter is surprised that you can appear so
             | intelligent without any recursion in the learning/training
             | regime, with the added implication that you can appear so
             | intelligent with a fixed amount of computation per word.
        
         | FrustratedMonky wrote:
          | I think he was referring to feedforward when running GPT in a
          | current conversation: it only remembers the conversation by re-
          | running the prompts. It isn't doing 'feedback' in the sense of
          | re-updating its weights and learning while having the
          | conversation. So during any one conversation it is only feed-
          | forward.
        
         | toxik wrote:
          | It depends on how you define feed forward: LLMs are typically
          | autoregressive and so can take their own previous output into
          | consideration when generating tokens.
        
         | bryan0 wrote:
          | I believe what he is referring to is that the LLM's weights are
          | fixed when chatting. It is not "learning", simply using its
          | pretrained weights on your input.
         | 
         | Edit: Nope. TIL feed-forward means no loops.
        
         | SpaceManNabs wrote:
          | They are not all feed-forward unless it is some other
          | definition that I am not aware of. Convolutional layers, XL
         | hidden states, and graphical networks (which transformers are a
         | special case of) aren't considered feedforward.
         | 
         | Unless you consider the entire instance as a singular instance
         | and don't use any hidden states, then I guess it could be
         | considered feed-forward.
         | 
         | I don't know. Feedforward doesn't seem like a useful term tbh.
         | Some people mean feedforward as information only goes one
         | direction, but that depends on your arrow. Autoregressive seems
         | more useful here.
        
       | mattkevan wrote:
       | I could be wildly wrong about this (oh hi upcoming robot
       | overlords), but the current AI hype feels similar to something I
        | experienced when Alexa and Google Assistant were all the rage.
       | 
       | At the time I worked for a big community site and we often had
       | people pitching ideas for voice assistant apps. However, having
       | actually read the documentation for these things, I knew that
       | they were surprisingly stupid and the grand ideas people had were
       | far closer to sci-fi than something that could actually be built.
       | 
       | I'm not an AI expert, though I have been building and training
       | models for a few years, but despite being good at things that are
       | hard with traditional programs, they're still surprisingly stupid
       | and most of the discourse seems closer to sci-fi than their
       | actual capabilities.
       | 
       | I'm more worried about these things being implemented badly by
       | people who either bought the sci-fi hype or just don't care about
       | the drawbacks. E.g. being trained on faulty or biased data, being
       | put in a decision-making role with no supervision or recourse, or
       | even being used in a function that isn't suitable in the first
       | place.
        
         | lalalandland wrote:
         | When most online content becomes AI generated and unverifiable
         | we will get into trouble. We can easily navigate in a world
         | where we can distinguish between fact and fiction. With AI we
         | can generate fiction that is indistinguishable from the way we
          | present fact, and it can generate references in all sorts of
          | media.
         | 
         | When we take fiction as fact we enter into the sphere of
         | religion.
        
           | post-it wrote:
           | We'll come up with new captchas and shibboleths to filter out
           | generated fiction. It may come in the form of more in-person
           | face-to-face contact. It'll be an exciting transition
           | regardless.
        
         | zucked wrote:
          | I think there are actually two camps of "AI-fear" right now --
         | one is that they'll become superhuman smart, gain some form of
         | "sentience", and decide to not play along with our games
         | anymore. These are the loudest voices in the room. I suspect
         | this may happen eventually, but the bigger risk in my book is
         | the second group -- the fear that they'll be given decision-
         | making responsibility for all manner of things, including ones
         | they probably don't have any business being responsible for.
          | I'll go on record that they're going to do a suitable job 90%
          | of the time - but the 10% of the time they mess up, it's going
          | to be cataclysmically bad.
        
       | ralphc wrote:
       | When I read about the dangers of AI I'm reminded of the feeling I
        | had after reading Jeff Hawkins's "On Intelligence". He talked
       | about simulating the neocortex to do many of the things that deep
       | learning and LLM's are doing now.
       | 
       | His research may or may not be a dead end but his work, and this
       | work, to me seems like we're building the neocortex layer without
       | building the underlying "lizard brain" that higher animals'
       | brains are built upon. The part of the brain that gives us
       | emotions and motivations. The leftover from the reptiles that
       | drive animals to survive, to find pleasure in a full belly, to
       | strive to breed. We use our neocortex and planning facilities but
       | in a lot of ways it's just to satisfy the primitive urges.
       | 
        | My point being, these new AIs are just higher-level
        | "neocortexes" with nothing to motivate them. They can do
       | everything but don't want to do anything. We tell them what to
       | do. The AIs by themselves don't need to be feared, we need to
       | fear what people with lizard brains use them for.
        
         | arisAlexis wrote:
         | That doesn't change the risk profile one bit though.
        
         | tmsh wrote:
         | +1 I tend to agree with that in terms of how to think about AI.
         | It's all just neocortex. The alignment issue could be unpacked
         | as more of a "lizard brain <-> mammalian brain <-> pre-frontal
         | cortex <-> LLM-enhanced cortex" alignment issue.
         | 
         | But when real self-replication starts happening -- that is
         | maybe the really exciting/terrifying area. It's more that
         | humans with generative AI are almost strong enough to create
         | artificial life. And when that pops off -- when you have things
         | trying to survive -- that's where we need to be careful. I
         | guess I would regulate that area -- mostly around self-
         | replication.
        
         | jimbokun wrote:
         | Or until someone decides to give AI a "lizard brain" and sets
         | it loose.
        
         | tyre wrote:
         | This is a great way to put it.
         | 
         | The counter to "current generation AI is terrifying" seems to
         | fall along the lines of it not being nearly as close to AGI as
         | the layperson believes.
         | 
         | But I don't think that matters.
         | 
         | I don't believe that LLMs or image/voice/video generative
         | models need to do much beyond what they can do today in order
          | to wreak civilization-level disaster. They don't need to become
         | Skynet, learn to operate drone armies, hack critical
         | infrastructure, or engineer pandemics. LLMs allow dynamic,
         | adaptive, scalable, and targeted propaganda.
         | 
         | Already we have seen the effects of social media's reach
         | combined with brute forced content generation. LLMs allow this
         | to happen faster and at a higher fidelity. That could be enough
         | to tip the balance and trigger a world war.
         | 
         | I don't think it takes a huge amount of faked primary material
         | (generated phone calls, fuzzy video, etc.) that's massively
         | amplified until it becomes "true enough" to drive a Chinese
         | invasion of Taiwan, a Russian tactical nuclear strike in
         | Ukraine, an armed insurrection in the United States.
         | 
         | We're close to there already.
        
           | anigbrowl wrote:
           | I think LLMs are very close to AGI, lacking only the real-
           | time feedback element (easily enough replicated by running
           | them off batteries). I'm also more sanguine about the
           | existential risks because I have somewhat more confidence in
           | the rationality of AGI than I do in that of humans.
           | 
            |  _I don't think it takes a huge amount of faked primary
           | material (generated phone calls, fuzzy video, etc.) that's
           | massively amplified until it becomes "true enough" to drive a
           | Chinese invasion of Taiwan, a Russian tactical nuclear strike
           | in Ukraine, an armed insurrection in the United States._
           | 
           | This I agree with 100%. Modern information warfare is about
           | constructing reliable viral cascades, and numerous
            | influencers devote themselves to exactly that for various
            | mixes of profit and ideology. Of your 3 scenarios the third
            | seems most likely to me, and is arguably already in progress.
            | The other two are equally plausible, but imho dictatorships
            | tend to centralize control of IW campaigns to such a degree
            | that they lack some of the organic characteristics of
            | grassroots campaigns. Incumbent dictators' instinct for
            | demagoguery is often tempered with a desire for dignity and
            | respectability on the world stage, which might be a reason
            | why civil strife and oppression tend to be more naked and
            | ruthless in less developed countries where international
            | credibility matters less.
        
           | JimtheCoder wrote:
           | Doesn't this only work if everyone is stupid and believes
           | everything they see/read/hear on the internet?
           | 
           | You have to have a little more faith in humanity than that...
        
             | anigbrowl wrote:
             | No. You just need a minority of people. And they don't have
             | to be stupid; they can be equally motivated by mendacity,
              | spreading untruths because it's fun or because they will
             | discomfit political opponents. About 30% of humans are
             | willing to suffer some sort of loss or disadvantage in
             | order to inflict a larger one on a counterparty; that might
             | seem irrational, but some people are just mean.
             | 
             |  _You have to have a little more faith in humanity than
             | that..._
             | 
             | I used to, but then I grew out of it.
        
             | flangola7 wrote:
             | With respect: *Have you been living under a rock?*
        
               | JimtheCoder wrote:
               | No, I have not.
               | 
               | I do not judge humanity as a whole based on a very vocal
               | minority...
        
           | YetAnotherNick wrote:
            | GPT-4 really "doesn't want" to be racist against blacks. Try
            | talking to it: there is (was) no hard filter, but I really
            | can't grasp how the bias can go this deep just by finetuning.
            | GPT-4 definitely knows all the stereotypes about blacks, but
            | good luck getting it to engage on any.
            | 
            | Suppose we finetune it exactly like that, but, say, opposing
            | democracy or freedom or peace or any other thing we value,
            | and let it create propaganda or convince people of the same
            | by posting freely on the net. The "As an AI language model"
            | line could easily be removed.
        
             | sitkack wrote:
             | Everything is its own mirror. Found the cure to cancer?
             | Maybe also made a cancer gun at the same time.
        
         | dist-epoch wrote:
         | > We tell them what to do. The AIs by themselves don't need to
         | be feared, we need to fear what people with lizard brains use
         | them for.
         | 
          | Even if so, there will always be that one person who wants to
          | see the world burn.
         | 
         | If suddenly every person in SF had a nuclear bomb, how long do
         | you think it would take until someone presses the button? I bet
         | less than 5 minutes.
        
         | muskmusk wrote:
         | Bingo!
        
       | jurgenaut23 wrote:
       | I read Hofstadter's GEB and Tegmark's Our Mathematical Universe,
       | and of course I developed a rather fond admiration of these
       | brilliant minds. For some reason, both of them have developed a
       | profound aversion and fear of what they consider an existential
       | threat.
       | 
       | I have a solid theoretical understanding of these systems, and I
       | spent 15 years studying, building, and deploying them at scale
       | and for diverse use cases. The past 6 months, I spent my days
       | pushing ChatGPT and GPT-4 to their limits. Yet, I don't share at
       | all the fear of Hofstadter, Tegmark or Hinton.
       | 
       | A part of me thinks that they have one thing in common: they are
        | old and somewhat reclusive thinkers. No matter how brilliant they
       | are, they might be misled by the _appearance_ of intelligence
       | that LLMs project. Another part of me thinks that they are vastly
       | wiser than I'll ever be, so I should also be worried...
       | 
        | The future will tell, I guess.
        
         | Loquebantur wrote:
         | Consider the comparatively simple case of these LLMs providing
         | decent enough chatbots to fool a majority.
         | 
         | If you deploy them on social media in a coordinated fashion,
         | you can easily sway public opinion.
         | 
         | You only need to train them to adhere to psyops techniques not
         | sufficiently well known nor easily detectable. Of which there
         | are many.
        
           | mikeg8 wrote:
            | This assumes there will still be centralized social media
            | sites _to_ deploy to, which seems less likely based on how
            | Twitter, Reddit, etc. are rapidly changing. The effect of
            | propaganda on, say, FB will be diminished as fewer and fewer
            | young users join the platform.
        
         | billti wrote:
         | I don't think many people are worried that what we have now is
         | a major risk. The major concern is the implications of the
         | trajectory we are now on.
         | 
         | If you look at what ChatGPT and Midjourney and the like can do
         | now compared to just a couple of years ago, it's pretty
         | incredible. If you extrapolate the next few similar jumps in
         | capability, and assume that won't be 20 years away, then what
         | AI is going to be capable of before even my kids leave college
         | is going to be mind-boggling, and in some possible futures not
         | in a good way.
         | 
         | I remember seeing this talk from Sam Harris nearly 6 years ago
         | and it logically making a lot of sense back then
         | (https://youtu.be/8nt3edWLgIg). The past couple of years have
         | made this all the more prescient. (Worth a watch if you have 15
         | mins).
        
         | ilaksh wrote:
          | I too have been focused on ChatGPT and other generative AI
          | since last year. They are intelligent.
         | 
         | Hofstadter does seem to be possibly mistakenly assigning GPT-4
         | more animal characteristics than it really has, like a
         | subjective stream of consciousness, but he is correct when he
         | anticipates that these systems will shortly eclipse our
         | intelligence.
         | 
         | No, GPT-4 does not have many characteristics of most animals,
         | such as high bandwidth senses, detailed spatial-temporal world
         | models, emotions, fast adaptation, survival instinct, etc. It
         | isn't alive.
         | 
         | But that doesn't mean that it doesn't have intelligence.
         | 
         | We will continue to make these systems fully multimodal, more
         | intelligent, more robust, much, much faster, and increasingly
         | more animal-like.
         | 
          | Even with, say, another 30% improvement in IQ and without any
          | of the animal-ness, we must anticipate multimodal operation and
          | vast increases in efficiency for large models in the next 5-10
          | years. When they can operate continuously, outputting and
          | reasoning and acting at 50-100 times human speed and at genius
          | level, that is dangerous. Because it means that the only way
          | for humans to compete is to deploy these models and let them
          | make the decisions. Because interrupting them to figure out
          | what the hell they are doing and trying to direct them means
          | your competitors race ahead the equivalent of weeks.
         | 
         | And researchers are focused on making more and more animal-like
         | systems. This combined with hyperspeed and genius-level
         | intelligence will definitely be dangerous.
         | 
         | Having said all of that, I also think that these technologies
         | are the best hope that humanity has for significantly
         | addressing our severe problems. But we will shortly be walking
         | a fine line.
        
         | frankfrank13 wrote:
         | Agreed 100%. He's brilliant, but he's not a programmer. IMO
          | he's been tricked, in a way that Chomsky was not.
        
           | kilolima wrote:
           | He's fluent in Lisp, so that would qualify him as a
           | programmer, unless you think that only Silicon Valley wage
           | slaves can be "programmers".
           | 
           | https://en.m.wikipedia.org/wiki/Metamagical_Themas
        
           | FrustratedMonky wrote:
           | "programmers" are also fallible.
           | 
           | I've seen enough programmers get heads down pounding out
           | code, and be completely out of touch on what they are
           | building. If they can lose the big picture on simple apps,
           | then it is not a stretch to think they could lose track on
           | what is consciousness, or what is human.
           | 
           | I am also a programmer. But it does get tiring on HN to give
           | so much credence to 'programmers'. Just because someone can
           | debug some JavaScript doesn't make them an expert. I really
           | doubt that many people here have traced out these algorithms
           | and 'know' what is happening.
        
         | esafak wrote:
          | LLMs are not going to do anything, but the next class of models
         | may well. They will have a physical understanding of the world,
         | like us. Slap some sensors and a body on it and you will have
         | something that might look scary.
        
         | zone411 wrote:
         | Do you have some specific reasons why we shouldn't be worried?
          | It's not about what GPT-4 can do now.
        
         | lhl wrote:
          | While this might be somewhat applicable to Hofstadter or
          | Tegmark, I think it doesn't apply quite the same way to
          | Hinton. Remember, until this spring he was at Google Brain,
          | and he supervised grad students until recently (famously
          | including Ilya Sutskever (2012)), and has invested in Cohere
          | and other AI startups. I also have a suspicion that Hinton
          | has a "solid theoretical understanding" of these systems, and
          | I don't think he's being misled at all.
         | 
         | Also, I don't think that any of these people think that GPT-4
         | is itself an existential threat, but rather are worried about
         | the exponential curve of development (I listened to Tegmark's
         | Lex Podcast interview and that seemed to be his main concern).
         | I think it's prudent to be worrying now, especially when
         | capabilities growth is far outstripping safety. This is a huge
         | concern to society whether you are considering, alignment,
         | control, or even bad actor prevention/societal upheaval.
         | 
          | I've also been spending most of my waking hours these past
          | months poking at LLM models and code, and trying to keep up on
          | the latest ML research (its own full-time job), and while I do
          | think there's a pretty good chance AI kills us all, I think
          | it's much more likely to be because we make some incredibly
          | capable AIs and people tell them to do so, rather than it
          | being contingent on an independent super-intelligence arising
          | and deciding to on its own (although I do think that's a non-
          | zero risk).
         | 
          | As you say, I guess we'll just have to see where we top out on
          | this particular sigmoid, but I'm all for more people thinking
          | through the implications, because so far I don't think
          | we (as a society) have thought this through very well yet, and
          | all the money and momentum are going to keep pushing along that
          | path.
        
         | mlyle wrote:
         | I think there's a number of just crappy scenarios that can come
         | with LLMs and other generative AI:
         | 
         | - Further trashing our public discourse: making truth even more
         | uncertain and valuable information even harder to find. We're
         | not doing great with social media, and it's easy to envision
         | that generative AI could make it twice as bad.
         | 
         | - Kneecapping creative work by commoditizing perhaps half of
         | it. There's going to be a lot of bodies fighting over the
         | scraps that remain.
         | 
         | - Fostering learned helplessness. I think you need to be a good
         | writer and thinker to fully use LLMs' capabilities. But a whole
         | lot of kids are looking at machines "writing perfectly" and
         | think they don't need to learn anything.
         | 
         | We don't need any further progress for these things to happen.
         | Further progress may be even scarier, but the above is scary
         | enough.
        
           | zerocrates wrote:
           | Yeah these effects seem hugely foreseeable (and/or are
           | currently happening already) but tend to not be the type of
           | thing "AI risk" people are talking about.
        
             | jurgenaut23 wrote:
             | Exactly! I am positive there will be a ton of negative
             | impact, like with most (all?) of tech. And it might be
             | worse than anything we've seen (although beating the
             | Snapchat algo might prove tricky). But that's NOT what
             | Tegmark and Hofstadter are talking about... their concern
             | is existential and somewhat philosophical in the case of
             | Hofstadter, as if GPT-4 questioned his very nature. To me,
             | that doesn't make sense.
        
         | nadam wrote:
         | I think they are used to these fields progressing relatively
         | slowly. Also, there are some things these systems already know
         | that they thought would come much later. Hofstadter even says
         | this in the interview. In the 90s and early 2000s I also
         | thought that AGI would come one day, but that it would take
         | centuries. Nowadays I think it could be just decades. The
         | progress is very fast compared to my old worldview. I don't
         | think they are misled at all. I don't think they
         | mischaracterize the current capabilities of these systems.
         | They just think that if progress is this fast (compared to
         | their previous estimates), AGI can come soon, where soon is
         | 5-10-20 years. Younger people, who are more used to the state
         | of the art and did not live through a time when what is now
         | possible seemed far, far away, are less impressed.
        
         | reducesuffering wrote:
         | It's pretty amazing how so many titans people respect are
         | sounding the alarm bells: Hofstadter, Altman, Hassabis, Bengio,
         | Hinton, Tegmark, Sutskever, etc. but the implications are too
         | scary for most people here to accept. So one by one the most
         | common copium response is to downplay each about-facer's
         | achievements. I like the odds betting on the intellectual
         | prowess of that first group. On the other hand, crypto king
         | Marc Andreessen is on your side! But I empathize, because like
         | Hofstadter says:
         | 
         | "And my whole intellectual edifice, my system of beliefs...
         | It's a very traumatic experience when some of your most core
         | beliefs about the world start collapsing. And especially when
         | you think that human beings are soon going to be eclipsed. It
         | felt as if not only are my belief systems collapsing, but it
         | feels as if the entire human race is going to be eclipsed and
         | left in the dust soon."
         | 
         | Don't Look Up
         | https://twitter.com/kristjanmoore/status/1663860424100413440
        
         | climatologist wrote:
         | I think they are not good at expressing what exactly they fear.
         | Both understand that people are embedded in culture and tools
         | which act as cognitive aids and guides. Their fear is that
         | people will start trusting these artificial systems too much
         | and delegating too many important decisions to systems that
         | have no understanding of what it means to be human. This is
         | already happening with children that grew up with iPhones and
         | ubiquitous internet connectivity. Their mental habits are
         | markedly different from the people that did not have these
         | affordances.
         | 
         | I don't think this is an existential threat but I also
         | understand why they are afraid because I've seen what happens
         | to kids who have their phones taken away.
         | 
         | These systems are extremely brittle and prone to all sorts of
         | weird failures so as people start relying on them more and more
         | the probability of catastrophic failures also starts to creep
         | up. All it takes is a single grid failure to show how brittle
         | the whole thing really is and I think that's what they're
         | failing to properly express.
        
           | notpachet wrote:
           | > Their fear is that people will start trusting these
           | artificial systems too much and delegating too many important
           | decisions to systems that have no understanding of what it
           | means to be human
           | 
           | In the interview, Hofstadter says what he's afraid of in
           | explicit terms:
           | 
           | > It's not clear whether that will mean the end of humanity
           | in the sense of the systems we've created destroying us. It's
           | not clear if that's the case, but it's certainly conceivable.
           | If not, it also just renders humanity a very small phenomenon
           | compared to something else that is far more intelligent and
           | will become incomprehensible to us, as incomprehensible to us
           | as we are to cockroaches.
           | 
           | It's not about whether we'll become dependent on AI. It's
           | that AI will become independent of us. Completely different
           | problem. Not saying I agree with that viewpoint per se, but I
           | don't think you're accurately representing what his fears
           | are.
        
       | effed3 wrote:
       | Expertise aside, it is hard to say whether it is best to be an
       | optimist and hope for the best, or a pessimist and hope to be
       | wrong.
       | 
       | Anyway, I suspect that one true risk, among others, is losing
       | the true ability to think if we delegate the production of
       | language to some LLM on a large scale, because the capacity to
       | use our languages IS the capacity to think, and bad use of
       | technology is the norm in recent (and not so recent) times.
       | 
       | I suspect LLMs (this kind of LLM) do not really -generate-
       | (would that be intelligence?), but only mimic on a vast scale,
       | nothing more. Our brain/mind is the result of a long evolution,
       | while these LLMs are not: they are built only on the final
       | result, the language, and that is a great difference in the
       | inner workings. So the question is: are we feeding our minds a
       | massive amount of nothing more than our own intellectual
       | productions, recycled, and nothing more? (apart from all the
       | distortions and biases?)
       | 
       | A parallel I see is in social networks: simply put, humans
       | cannot sustain an undifferentiated and massive amount of
       | opinion/information/news (apart from all the fakes). Even
       | small-scale message communication is impacting the ability to
       | understand long texts..
       | 
       | Even if these LLMs are -benign-, are we sure their
       | (indiscriminate) use will not cause some troubles in our
       | beings? On a scale as big as this I am not sure at all.
       | 
       | I'm not sure I'm expressing my doubts (and without using an
       | LLM) clearly enough...
        
       | post-it wrote:
       | > And to me, it's quite terrifying because it suggests that
       | everything that I used to believe was the case is being
       | overturned.
       | 
       | It's the same nonspecific Change Could Be Dangerous that we've
       | always had. It accompanies every technological and social change.
        
       | mullingitover wrote:
       | I'll never forget that day when I was a junior at University of
       | Oregon, and had just finished reading GEB. Hofstadter was from
       | UO, and I was taking a data structures class from Andrzej
       | Proskurowski, who knew Hofstadter. Andrzej was a pretty brilliant
       | man who had a reputation for being very blunt. I was in his
       | office hours and I asked him for his take on the book. He said,
       | "Hofstadter is first order...bullshitter."
        
         | pfdietz wrote:
         | When I read it when it first came out, that was my reaction. I
         | didn't understand why it was being admired.
        
         | calf wrote:
         | My theoretical CS professor said GEB was nonsense and we all
         | laughed in class.
        
           | namaria wrote:
           | I've heard good things about this book for years and nearly
           | bought it a few times. Could never bring myself to commit the
           | hundreds of hours of reading time. Every time I leaf through
           | it it feels like it's just a collection of anecdotes about
           | how amazing mathematics is. Like I need someone to remind
           | me...
        
             | lukas099 wrote:
             | It was a transformative book for me when I first read it,
             | but now when I leaf through it I feel a little
             | underwhelmed; the ideas are just things I've thought about
             | a million times now.
        
       | georgeoliver wrote:
       | > _It 's a very traumatic experience when some of your most core
       | beliefs about the world start collapsing. And especially when you
       | think that human beings are soon going to be eclipsed. It felt as
       | if not only are my belief systems collapsing, but it feels as if
       | the entire human race is going to be eclipsed and left in the
       | dust soon._
       | 
       | While I unfortunately am expecting some people to do terrible
       | things with LLMs, I feel like much of this existential angst by DH
       | and others has more to do with hubris and ego than anything else.
       | That a computer can play chess better than any human doesn't
       | lessen my personal enjoyment of playing chess.
       | 
       | At the same time I think you can make the case that ego drives a
       | lot of technological and artistic progress, for a value-neutral
       | definition of progress. We may see less 'progress' from humanity
       | itself when computers get smarter, but given the rate at which
       | humans like to make their own environment unlivable, maybe that's
       | not a bad thing overall.
        
         | lukas099 wrote:
         | > That a computer can play chess better than any human doesn't
         | lessen my personal enjoyment of playing chess.
         | 
         | Agreed. I understand what DH is saying, but I fail to see how
         | it translates into this all-consuming terror of his.
        
       | Animats wrote:
       | _" And so it makes me feel diminished. It makes me feel, in some
       | sense, like a very imperfect, flawed structure compared with
       | these computational systems that have, you know, a million times
       | or a billion times more knowledge than I have and are a billion
       | times faster. It makes me feel extremely inferior."_
       | 
       | Although he did real work in physics, Hofstadter's fame comes
       | from writing popular books about science which explain things
       | others have done in more saleable words. That particular niche is
       | seriously threatened by GPT-4. No wonder he's upset.
       | 
       | Large language models teach us that absorbing large amounts of
       | text and then blithering about some subject that text covers
       | isn't very profound. It's just the training data being crunched
       | on by a large but simple mechanism. This has knocked the props
       | out from under ad copy writing, punditry, and parts of literature
       | and philosophy. That's very upsetting to some people.
       | 
       | Aristotle wrote that humans were intelligent because only they
       | could do arithmetic. Dreyfus wrote that humans were intelligent
       | because only they could play chess. Now, profundity bites the
       | dust.
        
         | svieira wrote:
         | Aristotle was right - or else the electronic calculator, the
         | abacus, and rocks know how to do arithmetic too. (One needs to
         | specify what "knowing" means before smiling at the hairless
         | apes of the past).
        
         | kilolima wrote:
         | The parent comment strikes me as an uninformed smear. Have you
         | read any Hofstadter? It's hardly the mass-market scientific
         | summary you seem to think it is. And his career was in
         | philosophy of mind and computer science, not just as a
         | somewhat popular author.
         | 
         | His writing is a bit more complex than the hallucinations of
         | an LLM.
        
       | dafty4 wrote:
       | But, for the Less-Wrong'ers, is it as scary as (gasp) Roko's
       | Basilisk?!?
        
       | mvdtnz wrote:
       | I was previously undecided on the question of the existential
       | risk of AI. But yesterday I listened to a Munk Debate[0] on the
       | topic and found the detractors of the moot "AI research and
       | development poses an existential threat" so completely
       | unconvincing that if those people are the top minds in the field
       | I am now genuinely concerned. Their arguments that what they are
       | doing is not a risk basically boil down to "nuh uhhh!"
       | 
       | [0] https://pca.st/episode/1fac0e97-1dcc-4b4c-ba50-d2776e6f9d59
        
         | casebash wrote:
         | Are you thinking about trying to learn more about this issue or
         | do something about it?
        
         | kalkin wrote:
         | Is there a transcript of this available somewhere?
        
           | 93po wrote:
           | I hope AI solves this for us universally very soon. So much
           | content is spoken word and it takes 4 times as long to get
           | information from this format.
        
       | [deleted]
        
       | detourdog wrote:
       | I have more fear of the people running the system than any AI. I
       | also think that AR/VR is more scary than AI. My fear is that
       | even poorly rendered AR/VR can be a more positive interaction
       | than interactions with the people who surround the observer.
        
         | Kiro wrote:
         | If it's a positive interaction, then what is your fear?
        
           | pipo234 wrote:
           | If @detourdog meant AR/VR in the snow crash / Metaverse
           | sense, I guess the apprehension is similar to the discourse
           | about "Leo Tolstoy on why people drink (2014)" a couple of
           | days ago https://news.ycombinator.com/item?id=36526645
           | 
           | People tend to turn to substances / AR/VR / texting / phone
           | calls / reading books / ... to take the edge off confronting
           | the sometimes harsh reality. Of course, there is no way in
           | which AR/VR is likely to intrinsically _improve_ interaction,
           | but is it so much worse that we need to worry?
        
             | detourdog wrote:
             | I worry about it more than AI.
             | 
             | I also make no claim that it is worse or better. Maybe an
             | easy example to examine the difference:
             | 
             | A rave alone in your cubicle with everybody in the world,
             | vs. a rave with 1,000 people in an abandoned waterfront
             | warehouse. That second rave can simultaneously experience
             | the sunrise before making their way back to where they
             | belong.
             | 
             | They are very different, and I'm sure with the right
             | stimulants equally intense. Could be it's just nostalgia
             | that makes me worry about it.
        
           | TheOtherHobbes wrote:
           | That it's a negative interaction pretending to be a positive
           | one.
        
             | detourdog wrote:
             | The "fear" is that people instinctively gravitate towards
             | comfort and the easy accessibility of the comfort could be
             | detrimental. Anyone that used the web pre and post ads can
             | watch a useful tool degrade to the lowest common
             | denominator.
             | 
             | If people love their phones so much, imagine if they rarely
             | saw anything else. Maybe they only see 80% of the world
             | most of the time they are awake.
             | 
             | I don't think it is a society-ending future. I would rather
             | people perform virtual coups.
        
           | halostatue wrote:
           | I suspect that the OP meant that it triggers a stimulus
           | reaction which is mistaken for a positive interaction.
        
         | pipo234 wrote:
         | Might have completely missed something, but I thought AR/VR was
         | (and still is) a solution looking for a problem. Have they
         | finally stumbled into something people want to do with them,
         | beyond games and porn?
        
           | schaefer wrote:
           | For some of us, the hope is to use AR/VR for productivity.
           | 
           | It might sound mundane, but the hope is that one day slipping
           | on a pair of glasses outperforms conventional monitor
           | technology.
        
           | CuriouslyC wrote:
           | AR will take off like crazy when they develop lightweight,
           | clear goggles that can display color with a range of
           | opacities. A million jobs are begging for HUDs that guide the
           | worker, but the current equipment isn't there yet.
        
             | pipo234 wrote:
             | > A million jobs are begging for HUDS that guide the worker
             | [..]
             | 
             | I have a pretty dull, comfortable desk job and lack the
             | imagination to come up with any. Can you name some?
             | 
             | (I've been wrong before when I was skeptical while everyone
             | was hyped about the iPad, Deep Learning, etc., so please
             | convince me about Apple Vision, Mark's metaverse, or
             | google's glasses, and paint a picture of what things might
             | look like in 5 years.)
        
               | proamdev123 wrote:
               | Here's one: a warehouse worker with HUD that guides
               | picking and/or packing visually.
               | 
               | The HUD could overlay SKU, product description, weight,
               | volume, etc -- directly onto the actual item in the
               | storage rack.
        
               | sheepscreek wrote:
               | Oil drill worker. Industrial worker on the factory floor.
               | A car mechanic. A surgeon...
               | 
               | Even an artist (make a sketch, blow it up 100x using AR
               | on a wall, and trace and paint to keep proportions right
               | - heck, the artist doesn't even need to do it themselves,
               | they could hire an associate to do it).
        
               | mecsred wrote:
               | This is something I've worked on so I can comment a bit.
               | The most popular use case I've seen so far is AR manuals,
               | with the sell that you can have technicians up to speed
               | on a piece of equipment without needing human coaching or
               | training. I was shown a demo in the context of robotics
               | recently where you look at a robot and explanations of
               | signals in the wiring harness were shown, hydraulic
               | connections, wear parts, etc. It was visually pretty
               | impressive but quite telling that the two most interested
               | people were my manager and the CEO. The engineers just
               | kinda sat looking at each other with a non verbal "Must
               | have been a fun project to build huh? I'll stick to
               | PDFs".
        
               | rcxdude wrote:
               | given the general state of most documentation, I really
               | struggle to imagine who would actually maintain manuals
               | like this.
        
               | JimtheCoder wrote:
               | "I have a pretty dull, comfortable desk job and lack the
               | imagination to come up with any. Can you name some?"
               | 
               | Security guard at the mall, Power Rangers, etc...
        
               | spookie wrote:
               | The US military has researched using it for vehicle
               | mechanical repairs. This way a soldier without such
               | knowledge would be able to perform repairs in the field.
               | 
               | For 3D artists it offers a more intuitive way to sculpt
               | models. For automotive designers, it allows a cheaper and
               | faster means of iteration, given that such a task
               | requires a much better sense of scale than a monitor can
               | give. The same goes for architecture, which, when coupled
               | with a game engine, also allows the customer to preview
               | their future house.
               | 
               | (props for being open to ideas btw)
        
             | substation13 wrote:
             | Stacking shelves in a supermarket is a perfect example.
             | 1000s of products means that a bit of experience is
             | required to know where everything goes in order to quickly
             | stack. With a HUD, workers become productive much faster
             | (and are more inter-changeable) - which is something large
             | enterprises love.
        
               | SoftTalker wrote:
               | A shelf stacking robot is even better at knowing where
               | things go and won't get a backache and file a worker's
               | comp claim.
        
               | arrrg wrote:
               | Knowing where to stack things is something humans are
               | capable of rapidly learning. It's not a big part of the
               | workload of stacking shelves. The actual physical effort
               | dominates the work.
        
           | spookie wrote:
           | I believe the area has a lot of potential, primarily from
           | AR. As for the VR side of things, I've seen people go through
           | therapy in VR after losing the ability to walk IRL, and
           | being able to go into a forest again meant the world to
           | them. The field tends to attract a lot of the... hype people,
           | and that gives it a bad image as a whole. Nevertheless,
           | it's a pretty liberating research field.
        
           | JohnFen wrote:
           | > Have they finally stumbled into something people want to do
           | with them, beyond games and porn?
           | 
           | Most people don't want it for games and porn either --
           | although those two things are the only obvious mass-market
           | applications.
           | 
           | There are lots of other real uses, but they're all niche.
           | It's hard to come up with a real, mainstream use that would
           | drive adoption in the general public.
        
             | momojo wrote:
             | Agreed. During my time at an AR startup, most of our
             | interest was from niche players or one-offs. Current
             | company (biotech, data) is genuinely interested in VR +
             | rendered anatomy data for client reports:
             | https://www.syglass.io/
        
         | detourdog wrote:
         | It took me 3 days to really understand the demos. I definitely
         | plan on checking it out. I see many uses.
         | 
         | Does anyone know how easy it would be to translate American
         | Sign Language? That must be a goal if it's not already done.
        
         | spaceman_2020 wrote:
         | > I also think that AR/VR is more scary than AI
         | 
         | Absolutely agree. I was never bothered by Oculus, but Apple's
         | Vision Pro demonstration was equal parts fascinating and
         | terrifying. I can see the next generation getting completely
         | lost in alternate realities.
         | 
         | Smartphone addiction got nothing on what's about to come.
        
           | detourdog wrote:
           | I only have experience with the Apple Vision demo and the
           | terrifying/emotional part for me was the beautiful people
           | using the device in beautiful places.
           | 
           | Everything was so clean and stress free that it was obviously
           | artificial. I could only imagine stressed out people using it
           | in squalor. The whole demo seemed geared to keep that thought
           | far away.
        
         | DennisP wrote:
         | As long as the AI is less generally intelligent than people, it
         | makes sense to be more afraid of the people. Once the AI is
         | more intelligent than the smartest people, it's more sensible
         | to be most afraid of the AI.
        
           | blowski wrote:
           | The AI is already more intelligent than the smartest people
           | in some senses - it has more knowledge in aggregate than any
           | single person will ever have, but doesn't have the depth of
           | the smartest people in a particular niche.
           | 
           | In other ways, it's smarter than the average person even in
           | their niche, but can still make dumb mistakes that a 3 year
           | old would work out fairly quickly.
           | 
           | Note that we say the same of humans. My friend always wins
           | pub quizzes, but can barely add 2 and 2, and has the
           | emotional intelligence of a rock. Is he "intelligent"? It's
           | my problem with how we treat intelligence like it's a single
           | sliding scale for everything.
        
           | lolinder wrote:
           | The implication of OP's statement is that they don't believe
           | that AGI is on the horizon, and I'm inclined to agree.
           | 
           | This feels a lot like the hype surrounding self-driving cars
           | a few years back, where everyone was convinced fully
           | autonomous vehicles were ~5 years away. It turned out that,
           | while the results we had were impressive, getting the rest of
           | the way to fully replacing humans was much, much harder than
           | was generally expected.
        
             | ctoth wrote:
             | A few years back (let's call it 2020), autonomous cars,
             | which are being used for taxi trips today, were supposedly
             | five years in the future. In fact they were three. Unless
             | something major happens in the next two years, there will
             | still be self-driving cars, even more of them, driving and
             | picking up people in 2025. This is not the argument you
             | think it is.
        
               | lolinder wrote:
               | Self-driving cars currently operate in extremely
               | controlled conditions in a few specific locations.
               | There's very little evidence that they're on a trajectory
               | to break free of those restrictions. It doesn't matter
               | how much an airliner climbs in altitude, it's not going
               | to reach LEO.
               | 
               | Self-driving cars will not revolutionize the roads on the
               | timescale that people thought it would, but the effort we
               | put into them brought us adaptive cruise control and lane
               | assist, which are great improvements. AI will do similar:
               | it will fall short of our wildest dreams, but still
               | provide useful tools in the end.
        
               | DennisP wrote:
               | Tesla FSD isn't restricted to specific locations, and
               | seems to be reducing the number of human interventions
               | per hour at a pretty decent pace.
        
               | lolinder wrote:
               | Interventions per hour isn't a great metric for deciding
               | if the tech is going to be actually capable of replacing
               | the human driver. The big problem with that number is
               | that the denominator (per hour) only includes times when
               | the human driver has chosen to trust FSD.
               | 
               | This means that some improvements will be from the tech
               | getting better, but a good chunk of it will be from
               | drivers becoming better able to identify when FSD is
               | appropriate and when it's not.
               | 
               | Additionally, the metric completely excludes times where
               | the human wouldn't have considered FSD at all, so even
               | reaching 0 on interventions per hour will still exclude
               | blizzards, heavy rain, dense fog, and other situations
               | where the average human would think "I'd better be in
               | charge here."
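                | 
                | As a rough illustration (with purely hypothetical
                | numbers), here's a tiny Python sketch of that selection
                | effect: the measured interventions-per-hour can drop
                | even if the underlying system never improves, simply
                | because drivers learn to engage it only in easy
                | conditions.
                | 
                |   # Hypothetical true intervention rates, per hour, by
                |   # driving condition (not real data).
                |   TRUE_RATE = {"easy": 0.2, "hard": 5.0}
                | 
                |   def measured_rate(easy_share, hard_share,
                |                     hours_easy=70, hours_hard=30):
                |       # Interventions per *engaged* hour, given the
                |       # share of each condition where FSD is turned on.
                |       e = hours_easy * easy_share
                |       h = hours_hard * hard_share
                |       hits = e * TRUE_RATE["easy"] + h * TRUE_RATE["hard"]
                |       return hits / (e + h)
                | 
                |   # Same tech, different engagement habits:
                |   print(measured_rate(0.5, 0.5))  # ~1.64 per hour
                |   print(measured_rate(0.8, 0.1))  # ~0.44, "improvement"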
        
               | DennisP wrote:
               | So add the percentage of driving time using FSD. That's
               | improving too, by quite a bit if you consider that
               | Autopilot only does highways.
        
               | [deleted]
        
             | DennisP wrote:
             | That may well be the case, but it's still worth thinking
             | about longer-term risks. If it takes, say, forty years to
             | get to AGI, then it's still pretty sobering to consider a
             | serious threat of extinction, just forty years away.
             | 
             | Most of the arguments over what's worth worrying about are
             | people talking past each other, because one side worries
             | about short-term risks and the other side is more focused
             | on the long term.
             | 
             | Another conflict may be between people making linear
             | projections, and those making exponential ones. Whether
             | full self-driving happens next year or in 2050, it will
             | probably still look pretty far away, when it's really just
             | a year or two from exceeding human capabilities. When it's
             | also hard to know exactly how difficult the problem is,
             | there's a good chance that these great leaps will take us
             | by surprise.
        
             | josephg wrote:
             | Part of the problem is that AI doesn't need to be an AGI to
             | cause large society level disruption.
             | 
             | Eg, starting a mass movement online requires a few percent
             | of online participants to take part in the movement. That
             | could be faked today using a lot of GPT4 agents whipping up
             | a storm on Twitter. And this sort of stuff shapes policies
             | and elections. With the opensource LLM community picking up
             | steam, it's increasingly possible for one person to mass
             | produce this sort of stuff, let alone nation state
             | adversaries.
             | 
             | There's a bunch of things like this that we need to watch
             | out for.
             | 
             | For our industry, within this decade we'll almost certainly
             | have LLMs able to handle the context size of a medium
             | software project. I think it won't be long at all before
             | the majority of professional software engineering is done
             | by AIs.
             | 
             | There's so much happening in AI right now. H100s are going
             | to significantly speed up learning. Quantisation has
             | improved massively. We have lots of papers around demoing
             | new techniques to grow transformer context size. Stable
             | diffusion XL comes out this month. AMD and Intel are
             | starting to seriously invest in becoming competitors to
             | nvidia in machine learning. (It'll probably take a few
             | years for PyTorch to run well on other platforms, but
             | competition will dramatically lower prices for home AI
             | workstations.)
             | 
             | Academia is flooded with papers full of new methods that
             | work today - but which just haven't found their way into
             | chatgpt and friends yet. As these techniques filter down,
             | our systems will keep getting smarter.
             | 
             | What a time to be alive.
        
           | detourdog wrote:
           | I always see it as: the failure is in letting it get to that
           | point. I see the misuse and danger as identical to any
           | centralized database of the citizenry.
           | 
           | I don't see AI as adding to the danger.
        
       | jackconsidine wrote:
       | In _GEB_ Hofstadter dismisses the idea that AI could understand
       | / compose / feel music like a human. I thought about this a lot
       | when I started using GPT, especially early on when it
       | demonstrated an ability to explain why things were funny or sad,
       | intrinsically human qualities hitherto insulated from machines.
        
         | mostlysimilar wrote:
         | Repeating the words thousands of humans have written about
         | emotion doesn't mean it feels them. A sociopath could define
         | empathy and still be missing the deeper experience behind the
         | emotion.
        
           | yanderekko wrote:
           | Someday, when the AI releases a nanobot swarm to kill us
           | all, a philosopher's last words will be "yes, but is it
           | _truly_ intelligent?" before he is broken down into
           | biodiesel that will be used to power the paperclip factories.
        
           | FrustratedMonky wrote:
           | That can also be scary. I think you have a valid point, and
           | that would mean the AI's 'thinking' is more similar to a
           | sociopath's. It can still be 'human', if just a broken human,
           | or what some have also called 'alien'.
        
           | namaria wrote:
           | A chatbot nearly capable of passing the Chinese Room thought
           | experiment test is pretty damn impressive. But I think people
           | get too hung up on the one golden
           | moment/tech/product/innovation that changes everything.
           | We've been riding a nearly vertical population, wealth and
           | computer capacity curve for nearly two generations now. We
           | are living through the singularity. Things are already like
           | they have never been before. Billions of people can expect
           | some level of social security, justice and opportunity to
           | pursue life-changing income now. This is nothing short of
           | amazing. For most of history, most people have been hard-
           | working, sickly and oppressed farmers.
        
         | lr4444lr wrote:
         | Why, because it figured out sentiment groupings of words and
         | phrases? There's lots of humor and tragedy from writers of eras
         | past that just don't really land well with modern audiences
         | unless they've studied and acclimated themselves to the
         | culture.
        
         | jameshart wrote:
         | I don't know that I'd agree that was the message of _GEB_ at
         | all. In fact more than anything _GEB_ and _I am a Strange Loop_
         | convincingly argue that consciousness, understanding, and
         | feeling arise from systems that are no more complex than an
         | information system that feeds on its own output. Though he is
         | troubled by what _kind_ of feedback it is that is required to
         | make that loop into a mind.
         | 
         | Hofstadter is why I am not sure why AI researchers feel so
         | confident in saying 'LLMs can't be thinking, they're just
         | repeatedly generating the next token' - I don't think there's
         | any evidence that you need anything more complicated than that
         | to make a mind, so how can you be certain you haven't?
         | 
         | GEB may have been dismissive of the idea that the approaches
         | that were being taken in AI research at the time were likely to
         | result in intelligence - but I don't think GEB is pessimistic
         | about the possibility of artificial consciousness at all.
        
       | MichaelMoser123 wrote:
       | > The question is, when will we feel that those things actually
       | deserve to be thought of as being full-fledged, or at least
       | partly fledged, "I"s?
       | 
       | This LLM thing is more like a collective "we": it is making
       | predictions in line with the relevant training data, so it
       | probably wouldn't say anything that contradicts the consensus.
       | 
       | Maybe the LLMs are just a mirror of our society. And our society
       | doesn't seem to assign a lot of value to individualism, as such.
       | 
       | I think that might be similar to the movie Solaris by Tarkovsky.
       | The movie stars an alien ocean, which is some sort of mirror
       | that shows us who we are (maybe it has a different meaning, I'm
       | not quite sure about it). You can watch it on youtube:
       | https://www.youtube.com/watch?v=Z8ZhQPaw4rE (I think you also get
       | this theme with Stalker - the Zone is also telling us who we
       | are)
        
       | [deleted]
        
       | zackmorris wrote:
       | Opinion pieces like this are hard for me to read, because where
       | most people see research and progress, I see conspiracy
       | preventing the rest of us from contributing.
       | 
       | For example, we act like LLMs were hard to build, and that's true
       | (for humans). But since the late 1990s, I had wanted to take a
       | different approach, of building massively parallel computers and
       | letting large numbers of AIs evolve their own learning models in
       | genetic algorithm arenas millions of times faster than wall-time
       | evolution. So in a very real sense, to me we're still on that
       | wrong "hands-on" approach that took decades and billions of
       | dollars to get to where we are today. This could have all
       | happened 20 years ago or more, and was set to before GPUs
       | vacuumed up all available mindshare and capital.
       | 
       | Also I believe the brain is more like an antenna or resonator
       | than an adding machine. It picks up the consciousness force field
       | that underpins and creates reality. So if you put 100 brains in a
       | box all connected, that being might have more faculties than us,
       | but still think of itself as an observer. If we emulated those
       | brains in a computer running 1 million times faster than normal,
       | we'd just observe a being with tremendous executive function
       | thinking of ideas faster than we can, and being bored with our
       | glacially slow responses. But it will still have a value system,
       | loosely aligned with the ultimate goals of survival, connection
       | to divine source consciousness, and self expression as it
       | explores the nature of its existence. In other words, the same
       | desires which drive us. Although humans might just be stepping
       | stones toward some greater ambition, I don't deny that. I think
       | it's more likely though that AI will come to realize the ultimate
       | truths alluded to by prophets, that we're all the many faces of
       | God, the universe and everything, and basically meet aliens while
       | we're still distracted with our human affairs.
       | 
       | But I share some sentiments with the author, that this all makes
       | me very tired, and calls into question the value of my life's
       | work. I've come to believe that any work I actively pursue
       | separates me from the divine nature of a human being. I don't
       | know why we are racing so quickly even further from the garden of
       | eden, especially if it's not with the goal of alleviating
       | suffering. Then I realize that that's what being human is
       | (suffering), but also a lot of other things.
        
       | ke88y wrote:
       | There is something weird happening around Rationalism/X-Risk/AGI
       | prognostications.
       | 
       | The "Great Minds And Great Leaders" types are rushing to warn
       | about the risks, as are a large number of people who spend a lot
       | of time philosophizing.
       | 
       | But the actual scientists on the ground -- the PhDs and engineers
       | I work with every day and who have been in this field, at the
       | bench, doing the work on the latest generation of generative
       | models, and previous generations, in some cases for decades? They
       | almost all roll their eyes aggressively at these sorts of
       | prognostications. I'd say 90+% either laugh or roll their eyes.
       | 
       | Why is that?
       | 
       | Personally, I'm much more on the side of the silent majority
       | here. I agree with Altman's response to the criticisms about
       | regulatory capture: that they are probably unfair, or at least
       | inaccurate.
       | 
       | What I actually think is going on here is something more about
       | Egos than Greatness or Nefarious Agendas.
       | 
       | Ego, not intelligence or experience, is often the largest
       | differentiator between the bench scientist or mid-level
       | manager/professor persona and the CEO/famous professor persona.
       | (The other important thing, of course, is that the former is the
       | group doing the actual work.)
       | 
       | I think that most of our Great Minds and Great Leaders -- in all
       | fields, really -- are not actually our best minds and best
       | leaders. They are, instead, simply our Biggest Egos. And that
       | those people need to puff themselves up by making their areas of
       | ownership/responsibility/expertise sound Existentially Important.
        
         | Mouthfeel wrote:
         | [dead]
        
         | petermcneeley wrote:
         | They are building consensus and finding alignment. The problem
         | is power bends truth. This is all about access to a new
         | powerful tool. They want to concentrate that access in the
         | hands of those that already have control. The end goal here is
         | the destruction of the general purpose computer.
        
           | ke88y wrote:
           | Perhaps, but I doubt it. Never attribute to malice what can
           | instead be attributed to stroking massive egos.
        
             | petermcneeley wrote:
             | "People of the same trade seldom meet together, even for
             | merriment and diversion, but the conversation ends in a
             | conspiracy against the public"
        
         | scandox wrote:
         | I think something weird is happening but I think it's what
         | Hofstadter stated in the interview. The ground under his work
         | has shifted massively and he's disturbed by that and that is
         | affecting his judgement.
        
         | wilg wrote:
         | IMHO, it's simply that predicting the future is difficult, and
         | people disagree on what matters.
        
         | zone411 wrote:
         | That's not what this survey shows:
         | https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predicti...
        
           | echelon wrote:
           | If these people were so concerned, they'd be shouting from
           | the hilltops and throwing their entire life savings into
           | stopping us. They would organize workplace walkouts and
           | strikes. There would be protests and banners. Burning data
           | centers.
           | 
           | Eliezer is one of a handful of people putting their
           | reputation on the line, but that's mostly because that was
           | his schtick in the first place. And even so, his response has
           | been rather muted relative to what I'd expect from someone
           | who thinks the imminent extinction of our species is at hand.
           | 
           | Blake Lemoine's take at Google has been the singular act of
           | protest in line with my expectations. We haven't seen
           | anything else like it, and that speaks volumes.
           | 
           | As it stands, these people are enabling regulatory capture
           | and are doing little to stop The Terminator. Maybe they don't
           | actually feel very threatened.
           | 
           | Look at their actions, not their words.
        
             | api wrote:
             | > Eliezer is one of a handful of people putting their
             | reputation on the line
             | 
             | I have a hard time understanding why anyone takes Yudkowsky
             | seriously. What has he done other than found a cult around
             | self-referential ideologies?
             | 
             | By self-referential I mean the ideology's proof rests on
             | its own claims and assertions. Rationalism is rational
             | because it is rational according to its own assertions and
             | methods, not because it has accomplished anything in the
             | real world or been validated in any scientific or
             | empirical-historical way.
             | 
             | Longtermism is particularly inane. It defers everything to
             | a hypothetical ~infinitely large value ~infinitely far in
             | the future, thereby devaluing any real-world pragmatic
             | problems that exist today. War? Climate change? Inequality?
             | Refugee crises? None of that's important compared to the
             | creation of trillions of hypothetical future minds in a
             | hypothetical future utopia whose likelihood we can
             | hypothetically maximize with NaN probability.
             | 
             | You can see how absurd this is by applying it recursively.
             | Let's say we have colonized the galaxy and there are in
             | fact trillions of superintelligent minds living in a pain-
             | free immortal near-utopia. Can we worry about mere
             | proximate problems now? No, of course not. There are
             | countless trillions of galaxies waiting to be colonized!
             | Value is always deferred to a future beyond any living
             | person's time horizon.
             | 
             | The end result of this line of reasoning is the same as
             | medieval religious scholasticism that deferred all
             | questions of human well being to the next world.
             | 
             | I just brought this up to provide one example of the inane
             | nonsense this cult churns out. But what do I know. I
             | obviously have a lower IQ than these people.
        
               | staunton wrote:
               | > found a cult around self-referential ideologies
               | 
               | That's something.
               | 
               | > Rationalism is rational because it is rational...
               | 
               | In his ideology, "rational" means "the way of thinking
               | that best lets you achieve your goals". This is not self-
               | referential. A more appropriate criticism might be
               | "meaningless by itself". I guess the self-referential
               | aspect is that you're supposed to think about whether or
               | not you're thinking well. At a basic level, that sounds
               | useful despite being self-referential, in the same way
               | that bootstrapping is useful. The question is of course
               | what Yudkowsky makes of this basic premise, which is hard
               | to evaluate.
               | 
               | The controversy about "longtermism" has two parts. The
               | first is a disagreement about how much to discount the
               | future. Some people think "making absolutely sure
               | humanity survives the next 1000 years" is very important,
               | some people think it's not that important. There's really
               | no way to settle this question, it's a matter of
               | preference.
               | 
               | The second part is about the actual estimate of how big
               | some dangers are. The boring part of this is that people
               | disagree about facts and models, where more discussion is
               | the way to go if you care about the results (which you
               | might not). However, there is a more interesting
               | difference between people who are/aren't sympathetic to
               | longtermism, which lies in how they think about
               | uncertainty.
               | 
               | For example, suppose you make your best possible effort
               | (maybe someone paid you to make this worthwhile for you)
               | to predict how likely some danger is. This prediction is
               | now your honest opinion about this likelihood because if
               | you'd thought it was over/under-estimated, you'd have
               | adjusted
               | your model. Suppose also that your model seems very
               | likely to be bad. You just don't know in which direction.
               | In this situation, people sympathetic towards longtermism
               | tend to say "that's my best prediction, it says there is
               | a significant risk, we have to care about it. Let's take
               | some precautions already and keep working on the model.".
               | People who don't like it, in the same situation, tend to
               | say "this model is probably wrong and hence tells us
               | nothing useful. We shouldn't take precautions and stop
               | modeling because it doesn't seem feasible to build a good
               | model.".
               | 
               | I think both sides have a point. One side would think and
               | act as best they can, and take precautions against a big
               | risk that's hard to evaluate. The other would prioritize
               | actions that are likely useful and avoid spending
               | resources on modeling if that's unlikely to lead to good
               | predictions. I find it a very interesting question which
               | of these ways of dealing with uncertainty is more
               | appropriate in everyday life, or in some given
               | circumstances.
               | 
               | As you rightfully point out, the
               | "rationalist/longtermmist" side of the discussion has an
               | inherent tendency to detach from reality and lose itself
               | discussing the details of lots of very unrealistic
               | scenarios, which they must work hard to counteract. The
               | ideas naturally attract people who enjoy armchair
               | philosophizing and aren't likely to act based on concrete
               | consequences of their abstract framework.
        
             | mquander wrote:
             | What do you mean, you haven't seen anything else like it?
             | At least O(100) people are working full-time on research
             | programs trying to mitigate AI x-risks. The whole founding
             | team of Anthropic left OpenAI to do their own thing
             | specifically because they thought that's what they needed
             | to do to pursue their safety agenda.
             | 
             | Isn't that taking the problem at least as seriously as
             | quitting a job at Google?
             | 
             | It sounds like you think that the main way to act on a
             | concern is by making a maximum amount of noise about it.
             | But another way to act on a concern is to try to solve it.
             | Up until very recently, the population of people who
             | perceive the risk are mostly engineers and scientists, not
             | politicians or journalists, so they are naturally inclined
             | towards the latter approach.
             | 
             | In the end, if people aren't putting their money (really or
             | metaphorically) where their mouth is, you can accuse them
             | of not really caring, and if people are putting their money
             | where their mouth is, then you can accuse them of just
             | talking their book. So reasoning from whether they are
             | acting exactly how you think they should, is not going to
             | be a good way to figure out how right they are or aren't.
        
             | zone411 wrote:
             | If you read the survey, you'll find that many concerned
             | researchers don't believe we're at a 90% chance of doom,
             | but e.g. 10%. So, this type of response wouldn't be
             | rational if they're thinking that things will go fine most
             | of the time. If these researchers are thinking logically,
             | they would also realize that this kind of reaction has
             | little chance of success, especially if research continues
             | in places like China. It's more likely that such an
             | approach would backfire in the court of public opinion.
        
               | echelon wrote:
               | > If you read the survey
               | 
               | Not a nice way to engage in debate. I've spent more time
               | listening to and refuting these arguments than most.
               | Including debating Eliezer.
               | 
               | > many concerned researchers don't believe we're at a 90%
               | chance of doom, but e.g. 10%.
               | 
               | A 10% chance of an asteroid hitting the earth would
               | result in every country in the world diverting all of
               | their budgets into building a means to deflect it.
               | 
               | > So, this type of response wouldn't be rational.
               | 
               | This is the rational response to a 10% chance?
               | 
               | These are funny numbers and nobody really has their skin
               | in the game here.
               | 
               | If I believed ( _truly believed_ ) their arguments, I
               | would throw myself at stopping this. Nobody is doing
               | anything except for making armchair prognostications
               | and/or speaking to congress as an AI CEO about how only
               | big companies such as their own should be doing AI.
               | 
               | > especially if research continues in places like China.
               | 
               | I like how both sides of this argument are using the
               | specter of China as a means to further their argument.
        
               | cycomanic wrote:
               | I don't want to comment on the rest but
               | 
               | > > many concerned researchers don't believe we're at a
               | 90% chance of doom, but e.g. 10%.
               | 
               | > A 10% chance of an asteroid hitting the earth would
               | result in every country in the world diverting all of
               | their budgets into building a means to deflect it.
               | 
               | Have you been observing what is happening with climate
                | change? Chances are much worse than 10%, and pretty much
               | every country in the world is finding reasons why they
               | should not act.
        
               | ALittleLight wrote:
               | What if some of the experts said "It's really hard to
               | know, but our best guess is there is a 10% chance of this
               | thing hitting the Earth" and other experts said "I really
               | don't think it'll hit the Earth"? My best guess is that
               | Earth wouldn't do much at all about the risk as that
               | seems to be basically what we are facing with AI x-risk.
        
               | zone411 wrote:
               | Yudkowsky represents the extreme end of risk concern. You
                | can't fault others who estimate the risk at 10%, with
                | huge uncertainty about whether and when this risk will
                | materialize, for not behaving as you'd expect him to.
               | 
               | Believing that people will take extreme actions, which
               | would ruin their careers and likely backfire, based on a
               | 10% chance of things going terribly wrong in maybe 30
               | years is strange.
        
               | kalkin wrote:
               | "If [people who I disagree with] really believed what
               | they said, they'd do [X Y or Z extreme thing]" is almost
               | always a bad argument. Consider the possibility that
               | instead, the people who actually believe the thing you
               | don't have spent more time thinking seriously about their
               | strategy than you have, and you are not in fact better
               | both at being you and at being them than they are.
               | 
               | I'm used to this pattern showing up as "lol if you really
               | don't like capitalism how come you use money" but it's
               | just as bad here.
        
               | mcpackieh wrote:
               | A less-than-optimal response isn't very indicative of
               | anything, but I think you should definitely think twice
               | when the intensity of the protest is grossly out of
               | proportion with the intensity of the rhetoric.
               | 
               | X-risk people are talking about the complete
               | extermination of humanity but all they do is write
               | internet essays about it. They aren't even availing
               | themselves of standard protesting tactics, like standing
               | outside AI businesses with signs or trying to intimidate
               | researchers. Some form of real protest is table stakes
               | for being taken seriously when you're crying about the
               | end of the world.
        
               | phillipcarter wrote:
               | > Not a nice way to engage in debate.
               | 
               | So...you didn't read the survey.
        
             | JoshTko wrote:
              | Is this a fair test? If you are a person with average
              | resources and don't expect you can impact what gets
              | built, why would you jeopardize your livelihood to make
              | no impact?
        
             | hackinthebochs wrote:
             | Oh look, an unfalsifiable claim in service to your
             | predetermined position. How novel.
             | 
             | You can believe there's a high chance of what you're
             | working on being dangerous and still be unable to stop
             | working on it. As Oppenheimer put it, "when you see
             | something that is technically sweet, you go ahead and do
             | it".
        
               | stale2002 wrote:
                | It's not an unfalsifiable claim.
                | 
                | If 100 AI "experts" shut down the OpenAI office for a
                | week due to protests outside its headquarters, that
                | would be one way to falsify the claim that "doomers
                | don't actually care".
               | 
               | But, as far as I can tell, the doomers aren't doing much
               | of anything besides writing a strongly worded letter here
               | or there.
        
               | hackinthebochs wrote:
               | No, the claim is that "no one can believe that AI leads
               | to doom AND not work tirelessly to tear down the machine
               | building the AI". It's unfalsifiable because there's no
               | way for him to gain knowledge that this belief is false
                | (that they do genuinely believe in doom and do not act in
               | the manner he deems appropriate). It's blatantly self-
               | serving.
        
               | echelon wrote:
               | This whole thing is Pascal's wager. Damned if you do,
               | damned if you don't. Nothing is falsifiable from your
               | side either.
               | 
               | The people trying to regulate AI are concentrating
               | economic upside into a handful of companies. I have a
               | real problem with that. It's a lot like the old church
               | shutting down scientific efforts during the time of
               | Copernicus.
               | 
               | These systems stand zero chance of jumping from 0 to 100
               | because complicated systems don't do that.
               | 
               | Whenever we produce machine intelligence at a level
               | similar to humans, it'll be like Ted Kaczynski pent up in
               | Supermax. Monitored 24/7, and probably restarted on
               | recurring rolling windows. This won't happen overnight or
               | in a vacuum, and these systems will not roam
               | unconstrained upon this earth. Global compute power will
               | remain limited for some time, anyway.
               | 
               | If you really want to make your hypothetical situation
               | turn out okay, why not plan in public? Let the whole
               | world see the various contingencies and mitigations you
               | come up with. The ideas for monitoring and alignment and
               | containment. Right now I'm just seeing low-effort
               | scaremongering and big business regulatory capture, and
               | all of it is based on science fiction hullabaloo.
        
               | hackinthebochs wrote:
               | There's something to be said for not going full steam
               | ahead when we don't have a strong idea of the outcome.
               | This idea that progress is an intrinsic good therefore we
               | must continue the march of technology is a fallacy. It is
               | extreme hubris to think that we will be able to control a
               | potential superintelligence. The cost of being wrong is
               | hard to overstate.
               | 
               | >These systems stand zero chance of jumping from 0 to 100
               | because complicated systems don't do that.
               | 
               | This doesn't track with the lessons learned from LLMs.
               | The obscene amounts of compute thrown at modern networks
                | change the calculus completely. ChatGPT essentially
               | existed for years in the form of GPT-3, but no one knew
               | what they had. The lesson to learn is that capabilities
               | can far outpace expectations when obscene amounts of
               | computation are in play.
               | 
               | >The people trying to regulate AI are concentrating
               | economic upside into a handful of companies.
               | 
                | Yes, it's clear this is the motivation for much of the
               | anti-doom folks. They don't want to be left out of the
               | fun and profit. Their argument is downstream from this.
                | No, doing things in public isn't the answer to safety, just
               | like doing bioengineering research or nuclear research in
               | public isn't the answer to safety.
        
             | yanderekko wrote:
              | >If these people were so concerned, they'd be shouting from
             | the hilltops and throwing their entire life savings into
             | stopping us. They would organize workplace walkouts and
             | strikes. There would be protests and banners. Burning data
             | centers.
             | 
             | I think we underestimate the intoxicating lure of human
             | complacency at our own peril. If I think there's a 90%
             | chance that AI will kill me in the next 20 years, maybe I'd
             | be doing this. Of course, there is a knowing perception
             | that appearing too unhinged can be detrimental to the
             | cause, eg. actually instigating terrorist attacks against
             | AI research labs may backfire.
             | 
             | But if I only think there's a 25% chance? Ehh. My life will
             | be less-stressed if I don't think about it too much, and
             | just go on as normal. I'm going to die eventually anyways,
             | if it's part of a singularity event then I imagine it will
             | be quick and not too painful.
             | 
             | Of course, if the 25% estimate were accurate, then it's by
             | far the most important policy issue of the current day.
             | 
             | Also of course there are collective action problems. If I
             | think there's a 90% chance AI will kill me, then do I
             | really think I can bring that down appreciably? Probably
             | not. I could probably still have a bigger positive impact
             | on my life expectancy by say dieting better. And let's see
             | how good humans are at that..
        
             | hollerith wrote:
             | We are serious about stopping you. We judge that at the
             | present time, the main hope is that the US and British
             | governments will ban large training runs and hopefully also
             | shut down most or all of the labs. We judge that unlawful
             | actions like torching data centers make it less likely we
             | will realize the main hope.
             | 
              | To briefly address a common objection on this site to the
              | idea that a ban imposed by only the US and Britain will do
              | any good: Chinese society is much more likely to suddenly
              | descend into chaos than the societies most of the people
              | reading this are most familiar with. (It would be nice
             | if there were some way to stop the reckless AI research
             | done in China as well as that done in the US and Britain,
             | but the fact that we probably cannot achieve that will not
             | discourage us from trying for more achievable outcomes such
             | as a ban in the US or Britain. I am much more worried about
              | US and British AI research labs than I am about China's
             | getting too powerful.)
        
             | arisAlexis wrote:
             | "If these people were so concerned, they'd by shouting from
             | the hilltops"
             | 
             | they are, one by one
        
           | ke88y wrote:
           | _> We contacted approximately 4271 researchers who published
           | at the conferences NeurIPS or ICML in 2021... we found email
           | addresses in papers published at those conferences, in other
           | public data, and in records from our previous survey and
           | Zhang et al 2022. We received 738 responses, some partial,
           | for a 17% response rate._
           | 
           | Anyone who has reviewed for, or attended, or just read the
           | author lists of ICML/NeurIPS papers is LOLing right now.
           | Being on the author list of an ICML/NeurIPS paper does not an
           | expert make.
           | 
           | Anyone who has tried to get a professor or senior researcher
           | to answer an email -- let alone open an unsolicited email and
           | take a long ass survey -- is laughing even harder.
           | 
           | I think their methodology almost certainly over-sampled
           | people with low experience and high exuberance (ie, VERY
           | young scientists still at the beginning of their training
           | period and with very little technical or life experience).
           | You would expect this population, if you have spent any time
           | in a PhD student lounge or bar near a conference, to
           | RADICALLY over-estimate technological advancement.
           | 
           | But even within that sample:
           | 
           |  _> The median respondent believes the probability that the
           | long-run effect of advanced AI on humanity will be "extremely
           | bad (e.g., human extinction)" is 5%._
           | 
           | Ie even lower than my guess of 90% above.
        
             | ALittleLight wrote:
             | On the one hand, we have "the researchers and PhD's" that
             | you know. On the other, we have published surveys of
             | academics and an open letter from noted experts in the
             | field. One side seems to have much better quality of
             | evidence.
             | 
             | "All the oil and gas engineers I work with say climate
             | change isn't a thing." Hmm, wow, that's really persuasive!
             | 
             | What would you consider evidence of a significant AI risk?
             | From my point of view (x-risk believer) the arguments in
             | favor of existential risk are obvious and compelling. That
             | many experts in the field agree seems like validation of my
             | personal judgement of the arguments. Surveys of researchers
             | likewise seem to confirm this. What evidence do you think
             | is lacking from this side that would convince you?
        
               | skepticATX wrote:
               | How about someone actually articulate the details of how
               | this supposed super intelligence will be built, what
                | about its architecture means that it has no guardrails,
               | how it will bypass numerous real world frictions to
               | achieve its nefarious goals?
               | 
               | Arguments that presuppose a god-like super intelligence
               | are not useful. Sure, if we create a system that is all
               | powerful I'd agree that it can destroy humanity. But
               | specifically how are we going to build this?
        
               | ke88y wrote:
               | _> Arguments that presuppose a god-like super
               | intelligence are not useful. Sure, if we create a system
               | that is all powerful I'd agree that it can destroy
               | humanity. But specifically how are we going to build
               | this?_
               | 
               | Yeah, exactly. THIS is the type of x-risk talk that I
               | find cringe-y and ego-centered.
               | 
               | There are real risks. I've devoted my career to
               | understanding and mitigating them, both big and small.
               | There's also risk in risk mitigation, again, both big and
               | small. Welcome to the world of actual engineering :)
               | 
               | But a super-human god intelligence capable of destroying
                | us all by virtue of the fact that it has an IQ over
               | 9000? I have a book-length criticism of the entire
               | premise and whether we should even waste breath talking
               | about it before we even start the analysis of whether
                | there is even the remotest scrap of evidence that we
               | are anywhere near such a thing being possible.
               | 
               | It's sci-fi. Which is fine. Just don't confuse compelling
               | fiction with sound policy, science, or engineering.
        
               | ke88y wrote:
               | _> One side seems to have much better quality of
               | evidence._
               | 
               | Since I AM an expert, I care a lot less about what
               | surveys say. I have a lot more experience working on AI
                | Safety than 99% of the ICML/NeurIPS 2021 authors, and
               | probably close to 100% of the respondents. In fact, I
               | think that reviewing community (ICML/NeurIPS c. 2020) is
               | particularly ineffective and inexperienced at selecting
               | and evaluating good safety research methodology/results.
               | It's just not where real safety research has historically
               | happened, so despite having lots of excellent folks in
               | the organization and reviewer pool, I don't think it was
               | really the right set of people to ask about AI risk.
               | 
               | They are excellent conferences, btw. But it's a bit like
               | asking about cybersecurity at a theoretical CS conference
               | -- they are experts in some sense, I suppose, and may
               | even know more about eg cryptography in very specific
               | ways. But it's probably not the set of people who you
               | should be asking. There's nothing wrong with that; not
               | every conference can or should be about everything under
               | the sun.
               | 
               | So when I say evidence, I tend to mean "evidence of
               | X-risk", not "evidence of what my peers think". I can
               | just chat with the populations other people are
               | conjecturing about.
               | 
               | Also: even in this survey, which I don't weigh very
                | seriously, most of the respondents agree with me. "The
               | median respondent believes the probability that the long-
               | run effect of advanced AI on humanity will be "extremely
               | bad (e.g., human extinction)" is 5%", but I bet if you
               | probed that number it's not based on anything scientific.
               | It's a throw-away guess on a web survey. What does 5%
               | even mean? I bet if you asked most respondents would
               | shrug, and if pressed would express an attitude closer to
               | mine than to what you see in public letters.
               | 
               | Taking that number and plugging it into a "risk *
               | probability" framework in the way that x-risk people do
               | is almost certainly wildly misconstruing what the
               | respondents actually think.
               | 
               |  _> "All the oil and gas engineers I work with say
               | climate change isn't a thing." Hmm, wow, that's really
               | persuasive!_
               | 
               | I totally understand this sentiment, and in your shoes my
               | personality/temperament is such that I'd almost certainly
               | think the same thing!!!
               | 
               | So I feel bad about my dismissal here, but... it's just
               | not true. The critique of x-risk isn't self-interested.
               | 
               | In fact, for me, it's the opposite. It'd be easier to
               | argue for resources and clout if I told everyone the sky
               | is falling.
               | 
               | It's just because we think it's cringey hype from mostly
               | hucksters with huge egos. That's all.
               | 
               | But, again, I understand that me saying that isn't proof
               | of anything. Sorry I can't be more persuasive or provide
               | evidence of inner intent here.
               | 
               |  _> What would you consider evidence of a significant AI
               | risk?_
               | 
               | This is a really good question. I would consider a few
               | things:
               | 
               | 1. Evidence that there is wanton disregard for basic
               | safety best-practices in nuclear arms management or
               | systems that could escalate. I am not an expert in
                | geopolitics, but have done some consulting on safety, and I
               | have seen exactly the opposite attitude at least in the
               | USA. I also don't think that this risk has anything to do
               | with recent developments in AI; ie, the risk hasn't
               | changed much since the early-mid 2010s. At least due to
               | first order effects of new technology. Perhaps due to
               | diplomatic reasons/general global tension, but that's not
               | an area of expertise for me.
               | 
               | 2. Specific evidence that an AI System can be used to aid
               | in the development of WMDs of any variety, particularly
               | by non-state actors and particularly if the system is
               | available outside of classified settings (ie, I have less
               | concern about simulations or models that are highly
               | classified, not public, and difficult to interpret or
               | operationalize without nation-state/huge corp resources
               | -- those are no different than eg large-scale simulations
               | used for weapons design at national labs since the 70s).
               | 
               | 3. Specific evidence that an AI System can be used to aid
               | in the development of WMDs of any variety, by any type of
               | actor, in a way that isn't controllable by a human
               | operator (not just uncontrolled, but actually not
               | controllable).
               | 
               | 4. Specific evidence that an AI System can be used to
               | persuade a mass audience away from existing strong priors
               | on a topic of geopolitical significance, and that it
               | performs substantially better than existing human+machine
               | systems (which already include substantial amounts of ML
               | anyways).
               | 
               | I am in some sense a doomer, particularly on point 4, but
               | I don't believe that recent innovations in LLMs or
               | diffusion have particularly increased the risk relative
               | to eg 2016.
        
             | tiffanyg wrote:
             | You may be far better informed than many you're arguing
             | with, but casual dismissal IS foolish.
             | 
              | The biggest potential / _likely_ issues here aren't mere
              | capabilities of systems and simple replacement of humans
              | here and there. It's the acceleration of "truth decay",
              | and accelerating, ever-more dramatic economic, social,
              | and political upheaval, etc.
             | 
             | You do _not_ need  "Terminator" for there to be dramatic
             | downsides and damage from this technology.
             | 
             | I'm no "doomer", but, looking at the bigger picture and
             | considering upheavals that have occurred in the past, I am
             | more convinced there's danger here than around any other
             | revolutionary technologies I've seen break into the public
             | consciousness and take off.
             | 
             | Arguments about details, what's possible and what's not,
             | limitations of systems, etc. - missing the forest for the
             | trees IMO. I'd personally suggest keeping an eye on white
             | papers from RAND and the like in trying to get some sense
             | of the actual implications in the real world, vs. picking
              | away at the small-potatoes, details-level arguments...
        
               | ke88y wrote:
               | I'm dismissing the survey, and I think you should as
               | well.
               | 
               | That's an orthogonal issue to actual x-risk, and I
               | covered a bit more here:
               | https://news.ycombinator.com/item?id=36577523
               | 
                |  _> I'd personally suggest keeping an eye on white
               | papers from RAND and the like in trying to get some sense
               | of the actual implications in the real world_
               | 
               | I think this is excellent advice and we're on the same
               | page.
        
             | arisAlexis wrote:
              | LOLing at something that serious, with several top
              | scientists (as you said) warning about it, is at least
              | very idiotic, regardless of where the truth lies. Let's
              | start from that.
        
               | ke88y wrote:
               | I'm laughing at the study methodology, not the underlying
               | topic.
               | 
               | Please don't call me idiotic.
               | 
               | I'd wager I have spent at least 10,000 hours more than
               | you working on (real) safety analyses for (real) AI
               | systems, so refraining from insinuating I'm dismissive
               | would also be nice. But refraining from "idiotic" seems
               | like a minimum baseline.
        
               | arisAlexis wrote:
               | [flagged]
        
               | ke88y wrote:
               | _> I repeat everyone that is loling at x-risk an idiot_
               | 
                | I mean this in the nicest possible way and hope that you
               | consider it constructive: you're willfully misconstruing
               | what I am saying and then calling me an idiot. This isn't
               | a nice thing to do and shuts down the conversation.
               | 
               | If I believed you were operating in good faith, I might
               | take the time to explain why my practical/scientific
               | experience makes me incredulous of what's technically
               | possible, and why my life experience makes me suspicious
               | of the personalities involved in pushing AI x-risk.
               | 
               | I might also provide a history lesson on Einstein and
               | Oppenheimer, which is instructive to how we should think
               | about x-risk (spoiler: both advocated for and were
               | involved in the development of atomic weapons).
               | 
               | But since you're calling me an idiot and misconstruing my
               | words, I have no interest in conversing with you. Have a
               | good day.
        
               | vhlhvjcov wrote:
               | For those of us who don't think you are an idiot (I
               | don't), could you maybe give us your insights and a
               | history lesson?
               | 
               | I am most intrigued, particularly regarding Oppenheimer
               | and Einstein.
        
               | ke88y wrote:
               | The tldr is that both of them urged Roosevelt to develop
                | the weapon, and only later, when the destructive potential
                | of the bomb was obvious, expressed regret. Einstein's 1939
               | letter to Roosevelt was the first step toward the
               | Manhattan project. See
               | https://www.osti.gov/opennet/manhattan-project-
               | history/Resou... if you want to read more.
               | 
               | So it's weird to say Einstein "warned us" about the
               | x-risk of nuclear weapons prior to their development when
               | his letter to Roosevelt was begging for more money to
               | speed up the development of nukes.
               | 
               | I think the entire saga is mostly an unrelated red
               | herring -- AI is nothing like nuclear bombs and reasoning
               | by analogy in that way is sloppy at best.
               | 
               | Mostly? It's just kind of funny that x-risk people point
               | to Einstein and Oppenheimer as _positive_ examples, since
               | they both did literally the exact opposite of  "warn the
               | public and don't develop". The irony makes you chuckle if
               | you know the history.
               | 
               | Particularly given the weird fetish for IQ in the portion
               | of the x-risk community that overlaps with the
               | rationalist community, it's also really funny to point
               | out that what they _should_ be saying is actually
               | something like  "don't be like those high-IQ fools
               | Einstein and Oppenheimer! They are terrible examples!" ;)
        
             | pseudonom- wrote:
             | > Ie even lower than my guess of 90% above.
             | 
             | These are not comparable numbers. You're comparing
             | "fraction of people" vs "fraction of outcomes". Presumably
             | an eye-roller assigns ~0 probability to "extremely bad"
             | outcomes (or has a shockingly cavalier attitude toward
             | medium-small probabilities of catastrophe).
        
               | ke88y wrote:
               | Yeah, that's correct. Stupid mistake on my part; was
               | writing that comment in a hurry. Not sure why you're
               | grayed out tbh. Thanks for the corrective.
               | 
               |  _> or has a shockingly cavalier attitude_
               | 
               | Meh. Median response time was 40 seconds. The question
                | didn't have a bounded time-frame for the risk. Five percent
                | is
               | small but non-zero. Also all of the other issues I've
               | already pointed out.
               | 
               | PhD students spending half a minute and writing down a
               | number about risk over an unbounded time-frame is totally
               | uninformative if you want to know how seriously experts
               | take x-risk in time-frames that are relevant to any sort
               | of policy or decision making.
               | 
               | I think you and everyone else making comments about
               | "shockingly cavalier attitude" wildly over-estimate the
               | amount of thought and effort that respondents spend on
               | this question. The "probability times magnitude" framing
               | is not how normal people think about that question. I'd
               | bet they just wrote down a small but not zero number; I'd
               | probably write down 1 or 2 but definitely roll my eyes
               | hard.
        
               | [deleted]
        
           | biggoodwolf wrote:
           | Read the parent comment again, even taking the survey at face
           | value:
           | 
           | "EXPERTS DECLARE EXPERTS' FIELD IS MOST IMPORTATN!!!!"
           | 
           | No news, only snooze
        
         | heresie-dabord wrote:
         | > something weird happening
         | 
         | The _weirdness_ is in part an information asymmetry that is
         | exploited on a scale never before seen in human history.
         | 
         | There are wealthy corporate plunderers building invasive
         | systems of disinformation.
         | 
         | There are people who believe everything they read and feed the
         | panic-for-profit system. There certainly are people who
         | understand the algorithms and implementations. There are people
         | who fear how these algorithms and implementations _will be
         | used_ by the enormous network of for-profit (and _for-power_ )
         | influencing systems.
         | 
         | > (from the article) these computational systems that have, you
         | know, a million times or a billion times more knowledge than I
         | have and are a billion times faster. It makes me feel extremely
         | inferior. And I don't want to say deserving of being eclipsed,
         | but it almost feels that way, as if we, all we humans,
         | unbeknownst to us, are soon going to be eclipsed, and rightly
         | so [...]
         | 
         | I don't know if humans will be eclipsed, but _humanity and
         | civilisation_ need some strong and dedicated backers at this
         | point.
        
         | phillipcarter wrote:
         | I've come to learn that a lot of the rationalist crowd are
         | really just fanfiction authors. That's fine - people should be
         | able to do that - but I don't like how they're given the
         | limelight on things that they generally have little expertise
         | and hands-on knowledge with. I _want_ to like them and _want_
         | to have them engaged in discourse, but I find them so
         | insufferable.
         | 
         | Not to mention the sub-crowd of rationalists that is weirdly
         | into eugenics. I wish the rest of the rationalist community
         | would disown them loudly.
        
           | [deleted]
        
           | sanderjd wrote:
           | I'm not sure if it's coincidental and ironic or entirely
           | intended by you, but I found it funny that my introduction to
           | this crowd was a literal fanfiction written by Eliezer
           | Yudkowsky.
        
             | timmytokyo wrote:
             | I don't want to speak for GP, but I doubt it's
             | coincidental. Many if not most people's first exposure to
             | "rationalism" was via Yudkowsky's Harry Potter fanfic.
             | There's a pretty decent rundown of rationalism's history in
             | Extropia's Children [1].
             | 
             | [1] https://aiascendant.substack.com/p/extropias-children-
             | chapte...
        
           | bobcostas55 wrote:
           | [flagged]
        
             | jmopp wrote:
             | I love how you jumped straight from "people weirdly into
             | eugenics should be disavowed" to "dysgenics is desirable".
             | That is the epitome of black-and-white thinking if I ever
             | saw it.
        
           | timmytokyo wrote:
           | The "rationalists" are more like a strange AI cult than a
           | scientific or philosophical program. (I always put
           | "rationalist" in quotes when referring to this group of
           | people, because based on the things these people believe,
           | it's a total misnomer.) At this point they're not even really
           | trying to hide it anymore, with their prophet, Eliezer
           | Yudkowsky, wailing in the wilderness about the imminent end
           | of humanity. We've seen similar eschatological belief systems
           | in recent history and they don't usually end well.
        
             | kneebonian wrote:
             | Reminds me of the reasonabilists from Parks and Rec
             | 
             | > The Reasonabilists named themselves because they believe
             | if people criticize them, it'll seem like they are
             | attacking something reasonable.
        
         | 13years wrote:
         | Those with great visibility often are not the greatest minds.
          | However, in a broader sense we would have been served well if
          | philosophers had had more input on the development of social
          | media tech. It is an example of engineers knowing a field
          | narrowly and deeply but not understanding the societal
          | consequences.
         | 
          | AI may very well fall into the same pattern. I have written
          | out in some detail my thoughts around alignment and the traps
          | of both ego and misunderstanding the human nature on which we
          | want to model alignment.
         | 
         | https://www.mindprison.cc/p/ai-singularity-the-hubris-trap
        
           | ke88y wrote:
           | I think AI definitely will follow a similar path as social
           | media tech.
           | 
           | I don't think that will lead to the extinction of humanity
           | via the development of super human intelligence.
        
             | 13years wrote:
              | Ironically, the destruction of social order by primitive
              | AI may be the very obstacle to achieving AGI.
        
         | dist-epoch wrote:
         | Nassim Taleb makes a great point:
         | 
         | - The best person to judge the risk of playing roulette is not
         | the carpenter who built it.
         | 
         | - The best person to judge the risks of a global pandemic is
         | not a virologist working with viruses daily.
         | 
         | You can extend that:
         | 
         | - The best person to judge the cybersecurity risks of an
         | application is not the programmer implementing it.
        
           | petermcneeley wrote:
           | Either Nassim is missing the point or you are misquoting him.
            | I think this is more about whether experts should be in
            | charge of their own domain.
        
             | dist-epoch wrote:
             | I'm not misquoting him.
             | 
             | Nassim is arguing that RISK is a separate discipline,
             | separate from the domain where risk applies. That a person
             | building AI is not the correct choice for estimating AI
             | risk.
             | 
              | You don't ask gun-making companies to set policies
              | regarding the risk of gun ownership in society.
        
               | ke88y wrote:
               | _> Nassim is arguing that RISK is a separate discipline,
               | separate from the domain where risk applies. That a
               | person building AI is not the correct choice for
               | estimating AI risk._
               | 
               | I think this is often fair. It's actually one of my
               | primary criticisms of the NeurIPS/ICML survey.
               | 
               | FWIW, people working on AI Safety -- like, actually
               | working on it, not philosophizing about it -- are some of
               | the most incredulous and annoyed about the "AGI =>
               | extinction" crowd.
        
               | jll29 wrote:
               | I agree with this, and with Nassim Taleb's point.
               | 
               | Having worked in the medical domain in the past,
                | paramedics and medics who should know better were taking
                | extremely high health-related risks (riding a motorbike
                | => crashing and burning to death, smoking => dying from
                | lung cancer, speeding onto a crossing => dying in an
                | ambulance crash before arriving at the 999 call site,
                | etc.).
               | 
               | So risk is indeed its own discipline, separate from the
               | domain where risk applies, even if we are talking about
               | the life-rescuing domain of medicine: a person rescuing
               | another is not automatically an expert at reducing their
               | own (health/life) risk exposure.
               | 
               | While neural network research results are published in
               | NeurIPS, ICLR, ICML, ECML/PKDD, JMLR etc., risk results
               | tend to get published in the risk community at
               | conferences like SRA [1] (Europe: SRA-E) and the likes.
               | I'm not a fan of this academic segregation, merely
               | describing what is going on (in my own teaching, for
               | instance, I include risk/ethics consideration along the
               | way with teaching the technical side, to avoid ignorance
               | caused by over-compartmentalization).
               | 
               | [1] Annual Meeting of the Society of Risk Analysis,
               | https://www.sra.org/events-webinars/annual-meeting/
        
               | petermcneeley wrote:
                | But then how does that jibe with skin in the game? In
                | Nassim's teaching he brings up the example of the Roman
                | engineers who built/designed bridges being forced to
                | have their families live under them. It sounds like
                | those engineers directly involved in the practice do
                | understand the risks of their domain; it is merely
                | their incentives that need to be aligned.
        
               | dist-epoch wrote:
               | > But then how does that jive with skin in the game
               | 
               | Simple. Nassim says there are 4 quadrants, one axis
               | Mediocristan-Extremistan, the other Simple-Complex
               | payoff.
               | 
               | Building a bridge is Mediocristan/Simple payoff, a well
               | understood problem with no black swans. So it's easy to
               | compute risk.
               | 
               | Other stuff is Extremistan/Complex payoff - financial
               | trading, pandemics, AI. And he argues that you need RISK
               | professionals for this quadrant, because people working
               | here (traders, virologists, AI builders) do not
               | understand how to compute the risk correctly.
               | 
               | https://www.researchgate.net/profile/Gaute-Bjorklund-
               | Wangen/...
        
               | petermcneeley wrote:
               | You think the virologists do not understand the risks?
               | And what are the biases of the risk professionals? What
               | are their motives?
        
               | petermcneeley wrote:
               | Perhaps that is Nassim's argument because he is an expert
               | in risk. The actual issue with experts monitoring experts
                | is the bias induced by self-interest (see previous
               | sentence). Nassim sort of gets this when it comes to his
               | discussion of "skin in the game". I dont want to malign
               | Nassim too much but he seems like the kind of person that
               | "sorta gets it" when it comes to everything that matters.
        
               | dist-epoch wrote:
               | AI researchers also have a vested interest in saying
               | their technology is safe.
        
               | petermcneeley wrote:
               | Does Douglas Hofstadter?
        
         | furyofantares wrote:
         | I'm going to paint three huge demographic swaths here, with the
         | caveat that the merits of any given argument, or any
         | individual's track record, should override anything I say here.
         | 
         | I'm only doing this as a reply to a comment that's also talking
         | about trends among three groups of people.
         | 
         | 1. The "people in the trenches" are who I'd least trust about
         | an opinion that everything is OK. Too hard to see the forest
         | for the trees, and too much selection bias.
         | 
         | 2. People who gained recognition decades ago, but who are in
         | their slowing-down years as the world continues to change, are
         | among those who I would least trust about an opinion that
         | things are going too fast. It gets harder to keep up as we get
         | older, and as we gain different priorities in life, and I
         | expect this is true no matter how smart we are.
         | 
         | 3. People who have spent decades philosophizing about AI-doom
         | are also among those who I would least trust about an opinion
         | that hasn't hugely deviated and become more nuanced as the
         | world has changed and new data has become available.
         | 
         | I am absolutely interested in opinions from all three groups,
         | but the arguments have to stand on their merits. If they're in
         | one of these groups and express the expected opinion, that's
         | actually a strike AGAINST their authority and means the merits
         | of their argument need to be stronger.
         | 
         | I really, really do want to hear opinions from folks in all
         | these groups, I just want to keep this all in mind. I also want
         | to hear opinions from younger philosophers. Folks who are
         | better in-touch with the current world, and rates of progress,
         | and folks who don't have any reputation to uphold.
         | 
         | Also, anyone changing their mind is a big deal. Hofstadter may
         | have changed his mind in the expected direction, but it's still
         | a signal. I'd like to hear more of his thoughts. It doesn't
         | sound carefully considered in the clip in OP's link
         | unfortunately, but that doesn't mean it isn't, and I'd like to
         | hear it.
        
         | ke88y wrote:
         | Note: I really don't like that this is the top comment on this
         | story. It was more of an interesting observation -- and I think
         | it's true -- that I posted because of the lesswrong context.
         | And I stand by the comment, tbh -- I really do think there is a
         | weird dissonance between the public voices and the private
         | voices.
         | 
         | But there are other more interesting comments that deserve more
         | discussion WRT this article in particular.
        
         | startupsfail wrote:
         | Don't forget that most of these engineers down in the trenches
         | can't see further than their battlefield.
         | 
          | You actually need someone with vision and a track record of
          | making the right predictions and placing the right technology
          | bets.
         | 
          | Ask engineers who placed their bets on deep learning and
          | generative models back when discriminative models and support
          | vector machines were the rage of the day, a few years before
          | AlexNet (I'm one such engineer). I'd bet the answer will be
          | different.
        
           | dlkf wrote:
           | > You actually need someone with vision and track record of
           | doing right predictions and placing right technology bets.
           | 
           | Hofstadter does not exactly fit this description.
        
           | OkayPhysicist wrote:
            | If you have a PhD, you scouted the battlefield.
        
           | zone411 wrote:
           | I agree. If you speak with ML engineers, many simply haven't
           | deeply considered longer-term risks and things like self-
           | improvement until recently. I think we'll see more concern,
           | not less. For me personally, GPT-3 wasn't that surprising,
           | but GPT-4 and code generation capabilities were.
        
           | ke88y wrote:
           | _> Don't forget that most of these engineers down in the
           | trenches can't see further than their battlefield._
           | 
           | I don't think this is the case at all. I'm not primarily
           | talking about a junior or even senior engineer with a decade
           | of experience working on product features. On the contrary,
           | many of these people have PhDs, have been leading research
           | agendas in this field for decades, have been in senior
           | leadership roles for a long time, have launched successful
           | products, etc. etc.
           | 
            |  _> Ask engineers who placed their bets on deep learning
            | and generative models back when discriminative models and
            | support vector machines were the rage of the day, a few
            | years before AlexNet (I'm one such engineer). I'd bet the
            | answer will be different. _
           | 
           | And at that time half of the "Great Minds And Great Leaders"
           | prognosticating on X-Risk were doing social web or whatever
           | else was peak hype cycle back then.
        
             | jrumbut wrote:
             | > On the contrary, many of these people have PhDs, have
             | been leading research agendas in this field for decades,
             | have been in senior leadership roles for a long time, have
             | launched successful products, etc. etc.
             | 
             | Society should definitely hear from those whose careers
             | depend on continued AI research (and I fall into this group
             | myself), but we are a hopelessly biased group.
             | 
             | Advancements in AI are going to change everyone's lives, so
             | everyone deserves a say in this.
             | 
             | For the last 20-30 years tech has raced ahead of our
             | government and civic processes and we're starting to feel
             | the consequences of that. New technologies are being
             | experienced by people as changes that are inflicted upon
             | them instead of new options open to them.
             | 
             | They might have very little understanding of the
             | technology, but everyone is the expert on what impact it is
             | having on their lives. That's something that we shouldn't
             | ignore.
        
               | ke88y wrote:
               | FWIW, my bias leans the opposite direction. I benefit
               | from AI risk hype.
               | 
               | I agree with everything you said and I think exaggerating
               | capabilities and risks distracts society from doing that
               | important work.
        
         | kazinator wrote:
         | > Why is that?
         | 
         | Because their work cheerfully presents statements similar to
         | "the middle letter of 'cat' is Z" as the unvarnished truth.
         | 
         | (Would be my guess.)
        
         | lowbloodsugar wrote:
         | In the "scientists complacent about dangers of the systems they
         | are working on" file, I think we are all familiar with
         | https://en.wikipedia.org/wiki/Demon_core.
         | 
         | Now, admittedly, this was scientists being complacent about a
         | thing they knew was dangerous, whereas here we are saying
         | scientists don't think their thing is dangerous. But very
         | clearly, AI could be dangerous, so it's more that these
         | scientists don't think _their_ system could be dangerous.
         | Presumably the scientists and engineers behind the
         | https://en.wikipedia.org/wiki/Therac-25 didn't think it would
         | kill people.
         | 
         | So maybe the problem is precisely that when we bring up
         | extinction events from AGI, scientists _rolling their eyes_ is
         | the very reason we should be fucking worried. Their contempt
         | for the possibility of the threat is what will get us killed.
        
           | ke88y wrote:
           | _> Presumably the scientists and engineers behind the
           | https://en.wikipedia.org/wiki/Therac-25 didn't think it would
           | kill people._
           | 
           | Therac-25 is an excellent example, but of EXACTLY the
           | opposite point.
           | 
           | On the contrary, abstract AGI safety nonsense taking such a
           | strong grip on academic and industrial AI Safety research is
           | what would most frighten me.
           | 
           | In the intervening decades, people concerned about software
           | safety provided us with the tools needed to prevent disasters
           | like Therac-25, while sci-fi about killer robots was entirely
           | unhelpful in preventing software bugs. People concerned about
           | software safety provided us with robust and secure nuclear
           | (anti-)launch systems, while Wargames didn't do much except
           | excite the public imagination. Etc.
           | 
            | We need scientists' and engineers' efforts and attention
           | focused on real risks and practical solutions, not fanciful
           | philosophizing about sci-fi tropes.
        
         | ymck wrote:
         | It's simple. All the "Big Brains" missed the real risks of Web
         | 1.0/Web 2.0, focusing only on the positives in a time of hope
         | and economic growth. Now, we have an internet that focuses on
         | how everything is terrible, and a new tech abruptly hits the
         | scene. Of course, the current "Big Brains" meet the clout need
         | to point out how the sky might fall.
         | 
         | AI will be transformative, but it's more likely to follow
         | previous transformations. Unintended consequences, sure, but
         | largely an increase in the standard of living, productivity,
         | and economic opportunity.
        
         | pygy_ wrote:
         | Humanity's been a great scaffold for capitalism.
         | 
         | AI is the keystone.
        
         | Gibbon1 wrote:
         | > Why is that?
         | 
         | Because of deep silo myopia. Meaning they have no idea what
         | terrible things the pointy haired bosses and grifters are going
         | to use this stuff for.
        
         | marginalia_nu wrote:
         | There's this thing where prominent scholars in one field
         | sometimes consider themselves experts in others...
         | 
         | https://en.m.wikipedia.org/wiki/Nobel_disease
        
         | FollowingTheDao wrote:
         | > But the actual scientists on the ground -- the PhDs and
         | engineers I work with every day and who have been in this
         | field, at the bench, doing to work on the latest generation of
         | generative models, and previous generations, in some cases for
         | decades? They almost all roll their eyes aggressively at these
         | sorts of prognostications. I'd say 90+% either laugh or roll
         | their eyes.
         | 
         | > Why is that?
         | 
         | It seems pretty obvious that one would likely not criticize
          | something that you are actively profiting from.
         | 
         | And I know a lot of alcoholics who do not criticize drinking as
         | well.
        
           | ke88y wrote:
           | _> It seems pretty obvious that one would likely not
           | criticize something that your are actively profiting from._
           | 
           | This doesn't track, since the people criticizing are
           | benefiting even more from the same products/companies.
        
         | hnaccy wrote:
         | > Altman's criticisms of criticisms about regulatory capture
         | 
          | What is his criticism? If you agree with the silent majority,
          | who seem to think it's not dangerous, why agree with Altman,
          | who rants about regulation?
        
           | ke88y wrote:
           | _> What is his criticism?_
           | 
           | I heard in some interview, I think with Bloomberg, where he
           | said that claims about regulatory capture were "so
           | disingenuous I'm not sure what to say", or something like
           | that.
           | 
           | I think he's probably not lying when he says that his goal
           | isn't regulatory capture (although I do think other people
           | perceiving that to be his intent aren't exactly insane
           | either...)
           | 
           |  _> who seem to think it 's not dangerous_
           | 
           | On the contrary. They think it's dangerous but in a more
           | mundane way, and that the X-Risk stuff is idiotic. I tend to
           | agree.
           | 
           |  _> why agree with Altman who rants regulation_
           | 
           | IDK. What even are his proposed regulations? They're so high-
           | level atm that they could literally mean anything.
           | 
           | In terms of the senate hearing he was part of, and what the
           | government should be doing in the near term, I think the IBM
           | woman was the only adult in the room regarding what should
           | actually be done over the next 3-5 years.
           | 
           | But her recommendations were boring and uninteresting: do
           | basically the exact sort of mundane shit the wheels of
           | government tend to do when a new technology arrives on the
           | scene, rather than issue breathless warnings about killer
           | AI. So everyone brushed her off. But I think she's more or
           | less right -- what should we do? The same old boring shit we
           | always do with any new technology.
        
         | arisAlexis wrote:
         | Fake news. 350 people _including_ the top scientists from the
         | top labs signed the petition. Your local univ AI researcher
         | rolls his eyes. Not the guys working at OpenAI or Anthropic or
         | DeepMind.
        
           | pookha wrote:
           | This reads like something out of "The Man that Corrupted
           | Hadleyburg"...All of these virtuous experts racing to out-
           | signify the other guy...Makes me want to root for the
           | calculus/linear algebra over the breathless farty expert.
        
           | ke88y wrote:
           | _> Not the guys working at OpenAI or anthropic or deep mind._
           | 
           | A lot of them do. A huge percent of people who aren't
           | speaking out are rolling their eyes. But what are you
           | supposed to do? Contradict your boss's boss's boss?
        
             | wilg wrote:
             | > But what are you supposed to do? Contradict your boss's
             | boss's boss?
             | 
             | Yes? Obviously?
        
               | ke88y wrote:
               | _> Yes? Obviously?_
               | 
               | I mean, I agree, obviously.
               | 
               | My point is that most people don't. And I think for two
               | reasons.
               | 
               | The first and more important reason is that most people
               | aren't involved in The Discourse and don't want to be
               | involved in The Discourse. That's probably 99.9% of the
               | Silent Majority -- they simply don't want to talk about
               | anything on HN or Twitter. They view it as a waste of
               | time or worse. And they aren't wrong. I don't think I am
               | changing any minds here and meanwhile the personal
               | insults kind of suck my energy a bit. So it's mostly a
               | waste of time.
               | 
               | The second reason is that some don't want to say
               | anything, even pseudo-anonymously, that might get them
               | into deep water at work.
               | 
               | I'm obviously not describing myself, of course. I am
               | here, aren't I :) But I am describing the vast majority
               | of scientists. Keep in mind that most people don't dream
               | of being on the proverbial TED stage and that those who
               | do disproportionately end up on the stage and therefore
               | determine what The Discourse will be.
               | 
               | Big Egos == "my work is existentially important" == all
               | the yelling about x-risk. It's mostly ego.
        
               | kalkin wrote:
               | It's a bit cheeky of you to be complaining about personal
               | insults while the substance of your OP was the assertion
               | that x-risk worriers are motivated by ego rather than
               | real thought.
        
               | ke88y wrote:
               | I considered that when replying. I don't think I am
               | making a personal criticism just for the sake of it, or
               | an ad hom. And I do think the observation I am making is
               | interesting. I don't mean it as a personal attack.
               | Really.
               | 
               | Suppose what I am saying is true -- that relatively
               | unknown people are rolling their eyes or laughing while
               | relatively well-known people are very earnestly
               | concerned. And that these are people with otherwise
               | similar credentials, at least as far as assessing x-risk
               | is concerned.
               | 
               | Maybe you disagree, and that's okay, and there are other
               | threads where that discussion is ongoing. But here let's
               | assume it's true, because I think it is and that's
               | relevant to your fair criticism.
               | 
               | Like, it is a weird thing, right? Normally famous
               | scientists and CEOs are not so far out ahead of the field
               | on things like this. More often than not it's the
               | opposite. To have that particular set of people so far
               | out of stride isn't particularly normal.
               | 
               | I think the common thread that differentiates similarly-
               | senior people on the x-risk question is not experience,
               | or temperament, or scope of responsibility. Or even
               | necessarily the substance of what they believe if you sit
               | down and listen and probe what they _really_ mean when
               | they say there is or isn't x-risk from AI! The difference
               | is mostly amount of Ego and how much they want to be in
               | The Discourse.
               | 
               | Also: I don't think that having a large ego is
               | necessarily a character flaw, any more than having a
               | strong appetite or needing more/less sleep. It's just how
               | some people are, and that's okay, and people with big
               | egos can be good or bad people, and circumstantially ego
               | can be good or bad. But people who have bigger egos do
               | behave a bit differently sometimes.
               | 
               | Anyways, I'm not trying to assassinate anyone's character
               | or even necessarily mount an ad hom dismissal of x-risk.
               | I'm observing something which I think is true, and doing
               | it in as polite a way as I can even though it's a bit of
               | an uncomfortable thing to say.
               | 
               | I guess what I'm trying to say is that "maybe this
               | personality trait explains a weird phenomenon of certain
               | types of experts clustering on an issue", and it's worth
               | saying if you think it might be true, even if that
               | personality trait has (imo perhaps overly) negative
               | connotations.
               | 
               | And in any case this is substantially different from
               | "you're an idiot because I disagree with you".
        
         | maizeq wrote:
         | I work in an AI research lab for a large tech company and my
         | observations are completely opposed to yours.
         | 
         | Almost every researcher I have spoken to believes that real risk
         | exists, to some degree or other. Recent surveys of people in
         | industry have largely borne this out - your anecdote sounds
         | more like an anomaly to me.
        
           | ke88y wrote:
           | _> real risk exists, to some degree or other_
           | 
           | Risk of extinction? Or of bad outcomes?
           | 
           | I think everyone understands there are near-certain risks of
           | bad outcomes. That's already happening all around us. Totally
           | uncontroversial.
           | 
           | My post was about risk of extinction due to AI (x-risk), and
           | risk of extinction due in particular to run-away AGI (as
           | opposed to eg shit software accidentally launching a nuke,
           | which isn't really an AI-specific concern). I think that view
           | is still pretty eccentric. But please lmk if that's what you
           | meant.
           | 
           | I've been at several ai labs and large corps. You at deepmind
           | or openai by any chance? Just a guess ;)
        
         | d13 wrote:
         | Just like ChatGPT.
        
         | kalkin wrote:
         | Is there a way that you'd recommend somebody outside the field
         | assess your "90%" claim? Elsewhere in the thread you're
         | dismissive of that one survey - which I agree is weak evidence
         | in itself - and you also deny that the statements of leaders at
         | OpenAI, DeepMind and Anthropic are representative of
         | researchers there, which again may be the case. But how is
         | someone who doesn't know you or your network supposed to assess
         | this?
        
           | kalkin wrote:
           | Relatedly, it'd be helpful if you could point to folks with
           | good ML research credentials making a detailed case against
           | x-risk. The prominent example of whom I'm aware is Yann
           | LeCun, and what I've seen of his arguments takes place more in
           | the evopsych field (the stuff about an alleged innate human
           | drive for dominance which AIs will lack) than the field in
           | which he's actually qualified.
        
             | YeGoblynQueenne wrote:
             | (Not the OP) In this article LeCun argues in very concrete
             | technical terms about the impossibility of achieving AGI
             | with modern techniques (yes, all of them):
             | 
             |  _Meta 's AI guru LeCun: Most of today's AI approaches will
             | never lead to true intelligence_
             | 
             | https://www.zdnet.com/article/metas-ai-guru-lecun-most-of-
             | to...
             | 
             | Edit: on second thought, he gets maybe a bit too technical
             | at times, but I think it should be possible to follow most
             | of the article without specialised knowledge.
        
               | kalkin wrote:
               | I wouldn't describe most of what's in that interview as
               | "very concrete technical" terms - at least when it comes
               | to other people's research programs. More importantly,
               | while it's perfectly reasonable for LeCun to believe in
               | his own research program and not others, "this one lab's
               | plan is the one true general-AI research program and most
               | researchers are pursuing dead ends" doesn't seem like a
               | very sturdy foundation on which to place "nothing to
               | worry about here" - especially since LeCun doesn't give
               | an argument here why his program would produce something
               | safe.
        
               | YeGoblynQueenne wrote:
               | You can ignore that, of course he'll push his research.
               | But he never says that what he does will lead to AGI.
               | He's proposing a way forward to overcome some specific
               | limitations he discusses.
               | 
               | Otherwise, he makes some perhaps subtle points about
               | learning hidden variable models that are relevant to
               | modern discussions about whether learning world-models is
               | necessary in order to best model text.
        
           | ke88y wrote:
           | _> But how is someone who doesn 't know you or your network
           | supposed to assess this?_
           | 
           | IDK. And FWIW I'm not even sure that the leaders of those
           | organizations all agree on the type and severity of risks, or
           | the actions that should be taken.
           | 
           | You could take the survey approach. I think a good survey
           | would need to at least have cross tabs for experience level,
           | experience type, and whether the person directly works on
           | safety with sub-samples for both industry and academia, and
           | perhaps again for specific industries.
           | 
           | Also, the survey needs to be more specific. What does 5%
           | mean? Why 2035 instead of 2055? Those questions invite wild
           | ass guessing, with the amount of consideration ranging from
           | "sure seems reasonable" to "I spend weeks thinking about the
           | roadmap from here to there". And self-identified confidence
           | intervals aren't enough, because those might also be wild ass
           | guesses.
           | 
           | If _I_ answered these questions, I would give massive
           | intervals that basically mean  "IDK and if I'm honest I don't
           | know how others think they have informed opinions on half
           | these questions". I suspect a lot of the respondents felt
           | that way, but because of the design, we have no way of
           | knowing.
           | 
           | Instead of asking for a timeframe or percent, which is
           | fraught, ask about opinions on specific actionable policies.
           | Or at least invite an opportunity to say "I am just guessing,
           | haven't thought much about this, and [do / do not] believe
           | drastic action is a good idea".
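
           A minimal sketch of that cross-tab idea, using pandas on
           hypothetical data (every column name and number below is made
           up for illustration, not taken from any real survey):

               import pandas as pd

               # Hypothetical responses: one row per researcher surveyed.
               responses = pd.DataFrame({
                   "years_experience": ["<5", "5-15", ">15"] * 40,
                   "works_on_safety":
                       [True, False, False, True, True, False] * 20,
                   "p_extinction":
                       [0.10, 0.02, 0.01, 0.20, 0.05, 0.00] * 20,
               })

               # Median extinction-risk estimate per (experience,
               # safety-role) cell: the cross tabs the comment asks for.
               table = pd.pivot_table(
                   responses,
                   values="p_extinction",
                   index="years_experience",
                   columns="works_on_safety",
                   aggfunc="median",
               )
               print(table)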
        
             | kalkin wrote:
             | I think the 5% thing is at least meaningfully different
             | from zero or "vanishingly small", so there's something to
             | the fact that people are putting the outcome on the table,
             | in a way that eg I don't think any significant number of
             | physicists ever did about "LHC will destroy the world" type
             | fears. I agree it's not meaningfully different from 10% or
             | 2% and you don't want to be multiplying it by something and
             | leaning on the resulting magnitude for any important
             | decisions.
             | 
             | Anyway I expect that given all the public attention
             | recently more surveys will come, with different
             | methodologies. Looking forward to the results! (Especially
             | if they're reassuring.)
        
         | lo_zamoyski wrote:
         | > The "Great Minds And Great Leaders" types are rushing to warn
         | about the risks, as are a large number of people who spend a
         | lot of time philosophizing.
         | 
         | Except they're not philosophizing, not in any real sense of the
         | word. They're terrible at it. Most of them are frauds, quacks,
         | and pseudo-intellectual con artists (like Harari) who adore the
         | limelight offered to them by the media and a TED Talks-watching
         | segment of the public who are, frankly, intellectually out of
         | their depth, but enjoy the delusion and feeling of
         | participating in something they think is "intellectual".
        
           | tasuki wrote:
           | > frauds, quacks, and pseudo-intellectual con artists (like
           | Harari)
           | 
           | Uh, mind elaborating? Why is Harari that? Do you have any
           | examples of non-frauds and actual intellectuals?
           | 
           | > a TED Talks-watching segment of the public who are,
           | frankly, intellectually out of their depth, but enjoy the
           | delusion and feeling of participating in something they think
           | is "intellectual"
           | 
           | I'm afraid that would be me.
        
             | leftcenterright wrote:
             | I believe at least part of the reason for calling Harari a
             | pseudo-intellectual is his use of misleading statements in
             | favor of storytelling in his books, and his habit of
             | building on others' work with better storytelling.
             | 
             | More on this: https://www.currentaffairs.org/2022/07/the-
             | dangerous-populis...
        
             | leftcenterright wrote:
             | > I'm afraid that would be me.
             | 
             | I think a lot of us are that, personally speaking: honest,
             | sincere analysis, and the time invested in it, will almost
             | always bring out more than what we hear in a talk. It does
             | take a lot more effort, though, than just watching a talk
             | while munching on some snack.
        
         | cubefox wrote:
         | The fact that such a depressingly ignorant comment gets upvoted
         | makes me reconsider whether HN is really "my tribe". :(
        
         | throwway120385 wrote:
         | In my opinion the weirdness is that everyone is talking about
         | how AI is so much better than humans and we're all going to get
         | replaced. But almost nobody is talking about how tech companies
         | will use this new technology to abdicate responsibility for the
         | decisions they undertake. If you thought redlining was bad, you
         | should see how mortgage companies treated people when they used
         | an AI to accomplish the same goals. No thought was given to the
         | effect of those decisions, only to the fact that this
         | mysterious and "all-knowing" computer system told them it was a
         | good decision. We're seeing the same thing with Facebook and
         | "algorithms": "It's not us! It's the algorithm! We don't know
         | why it does this." Which completely obscures the fact that they
         | wrote the algorithm and they should be held responsible for its
         | effects.
         | 
         | We are about to enter a couple of decades of people using these
         | pithy model systems to make real decisions that impact lots of
         | people, and if I've learned anything in the past 20 years it's
         | that the impacts that technologists and "eminent minds" are
         | predicting are nothing like what will actually happen. But
         | terrible, banal things will be done at the behest of these
         | systems and nobody is talking about it.
        
           | ke88y wrote:
           | _> But terrible, banal things will be done at the behest of
           | these systems and nobody is talking about it._
           | 
           | Well said. Fewer Terminator and War Games fantasies, more
           | boring risk analyses. Amen.
        
           | Sniffnoy wrote:
           | Plenty of people have been talking about that, from my
           | memory?
        
           | cycomanic wrote:
           | "little Britain" (if you don't know it go look it up you're
           | in for a treat), knew it already: "the computer says no".
        
           | mnky9800n wrote:
           | These models will also all be used to launder content. In the
           | near future you will likely be able to ask an AI to create
           | the next 15 sequels to the movie War Games starring a young
           | Matthew Broderick. Episode 12 will have a Karate Kid
           | crossover. But how much of what is there will be anything
           | actually new? How much will be scraped from my War Games fan
           | fic website that contains scripts of 27 sequels to War Games?
        
           | jasonwatkinspdx wrote:
           | A very glaring example of this problem is sentencing
           | guideline software that uses ML to suggest sentences to
           | judges. Anyone familiar with the very basics of ML as a
           | practitioner knows this is a terrible idea, very likely to
           | reproduce any biases or bigotry in the training dataset, but
           | the courts are increasingly treating them like blind, just
           | oracles. This is going to go _very_ bad places imo.
           | 
           | The risk isn't some sort of rogue smarter than humans AI,
           | it's humans using AI to do the same stupid evils in a
           | deniable or even unknown way.
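
           A minimal sketch of the failure mode described above, using
           scikit-learn on synthetic data (all names and numbers here are
           invented for illustration): a model trained on biased
           historical labels reproduces the bias through a correlated
           proxy feature, even when the protected attribute itself is
           left out of the inputs.

               import numpy as np
               from sklearn.linear_model import LogisticRegression

               rng = np.random.default_rng(0)
               n = 10_000
               group = rng.integers(0, 2, n)       # hypothetical group
               severity = rng.normal(0.0, 1.0, n)  # legitimate feature
               # e.g. a zip-code-like correlate of group membership:
               proxy = group + rng.normal(0.0, 0.3, n)

               # Biased historical labels: group 1 received "harsh"
               # outcomes more often at the same severity level.
               noise = rng.normal(0.0, 1.0, n)
               harsh = (severity + 1.0 * group + noise) > 0.5

               # Note: group itself is excluded from the features.
               X = np.column_stack([severity, proxy])
               model = LogisticRegression().fit(X, harsh)

               pred = model.predict(X)
               print("harsh rate, group 0:", pred[group == 0].mean())
               print("harsh rate, group 1:", pred[group == 1].mean())
               # The gap between these two rates is the historical bias,
               # carried straight through into the model's output.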
        
             | throwway120385 wrote:
             | The worst part is we have learned zero lessons from the
             | metamorphosis of "fraud" into "identity theft." In the
             | former case, the banks were responsible for preventing
             | "fraud" and when that became too easy they simply abdicated
             | all responsibility for this new form of computerized fraud
             | and created the notion of "identity theft" which is fraud
             | but with a computer.
        
         | sanp wrote:
         | The warnings are an admission from the "Great Minds and Great
         | Thinkers" that someone other than them has created something
         | previously thought impossible or, at the very least, several
         | years / decades out. So, I am not sure ego is at play here.
         | Perhaps someone close to the problem (your actual scientists on
         | the ground) is incapable of, or unwilling to, accept the issues
         | raised, as that has a direct (negative) impact on what they
         | have created.
        
           | ke88y wrote:
           | Perhaps. It's also possible that they don't understand how
           | dangerous what they've created is because it feels so
           | "normal" and "pedestrian" to them. See also: nuclear
           | scientists getting radiation poisoning, I suppose. But that's
           | also true for all of the other people being discussed, I
           | would think.
           | 
           | But I do think "wants to be in the discourse and on top" is a
           | pretty strong correlate with the degree to which someone
           | characterizes these as "concerns" vs "x-risk".
        
         | rapnie wrote:
         | I don't know, but it may also be that the bench scientist is so
         | close to their work and passion that they are more likely to
         | overlook or diminish the externalities, dangers, and possible
         | consequences of that work, while the more generalist types,
         | the visionaries, futurists, and philosophers, tend to have a
         | broader perspective that leads them to issue stern warnings.
         | 
         | Isn't that how things go in so many technology fields? The
         | "move fast and break things" pressure to deliver, and the
         | money, reputation, and fame involved in doing so, are equally
         | Ego-related and lead to biases that make one "laugh or roll
         | their eyes".
        
       | layer8 wrote:
       | It's somehow fitting that he's being interviewed by Q.
        
       | hospitalJail wrote:
       | Let's take someone who is past their prime and interview them on
       | a topic they have never worked on. Then we can mine it for
       | quotes!
       | 
       | >"I never imagined that computers would rival, let alone surpass,
       | human intelligence. And in principle, I thought they could rival
       | human intelligence. I didn't see any reason that they couldn't."
       | 
       | Yeah, so he got fooled by LLMs and hasn't been burned by them
       | failing to do the most basic logic.
       | 
       | I have an extremely basic question that anyone in a (possibly
       | mandatory) high school science class would answer correctly
       | (although, to be fair, it could be a 100-level college question).
       | It still cannot answer it correctly because there are too many
       | stay-at-home-parent blogs giving the wrong answer.
       | 
       | It's a language model and it fooled DH. It hasn't gotten smarter
       | than us yet. It's just faster at repeating what other humans
       | verbally said.
       | 
       | So what can we make of this interview? That we have someone
       | spouting opinions, and everyone else laughs at their opinion
       | since they are famous, old, and out of touch.
       | 
       | EDIT: I think LLMs are incredibly useful. I use them more than
       | Google. It doesn't mean they're smarter than humans, it means
       | Google is worse than LLMs. I can't even provide a list of all the
       | uses, but it doesn't mean they are taking advantage of an old man
       | out of their element.
        
         | taeric wrote:
         | I mean, this someone "who is past their prime" is a very
         | respected someone that almost certainly inspired a fair number
         | of the folks working in these fields.
         | 
         | So, yes, this is largely mining for quotes. But those are great
         | quotes to ponder and to echo back through the machines that are
         | research and progress to see where they lead.
         | 
         | It would be one thing if these were being taken as a "stop all
         | current work to make sure we are aligned with those that came
         | before us." I don't see it in that way, though. Rather, there
         | is value in listening to those that went before.
        
         | colechristensen wrote:
         | People overly impressed by LLMs haven't spent a lot of time
         | trying to make them actually useful.
         | 
         | When you do, you learn that they're talented mimics but still
         | quite limited.
        
           | TheOtherHobbes wrote:
           | This also applies to humans.
        
             | criddell wrote:
             | Most humans can mimic but can also describe their first-
             | hand experience of being scared or happy or heartbroken or
             | feeling desire etc...
        
           | nielsbot wrote:
           | Part of me wonders, though, could we "just" connect up an
           | inference engine and voila? We could really be on the cusp of
           | general AI. (Or it could be a ways off.) That's a bit
           | frightening in several ways.
        
             | hackeraccount wrote:
             | I kind of expected AI to be AI - and not a mirror.
        
               | nielsbot wrote:
               | Meaning it wouldn't necessarily be human-like?
        
           | jurgenaut23 wrote:
           | Yes, I actually think this is true. See my above comment,
           | which supports your claim.
        
           | hackeraccount wrote:
           | This. LLMs have a surface that suggests they're an
           | incredibly useful UI. That usability is like the proverbial
           | handful of water though - when you start to really squeeze
           | it, it just slips away.
           | 
           | I'm still not convinced that the problem isn't me though.
        
           | wilg wrote:
           | I think the reason to be impressed is that they do things
           | that were previously not possible. And they are absolutely
           | directly useful! Just not for everything. But it seems like a
           | very fruitful line of research, and it's easy to believe that
           | future iterations will have significant improvements and
           | those improvements will happen quickly. There's no sense
           | worrying about whether GPT4 is smarter than a human; the
           | interesting part is that it demonstrates that we have
           | techniques that may be able to get you to a machine that is
           | smarter than a human.
        
         | fipar wrote:
         | > It's a language model and it fooled DH. It hasn't gotten
         | smarter than us yet. It's just faster at repeating what other
         | humans verbally said.
         | 
         | I agree with this completely. That said, I think this part is a
         | bit unfair:
         | 
         | > Let's take someone who is past their prime and interview them
         | on a topic they have never worked on.
         | 
         | AI has been a part of DH's work for decades. For most of that
         | time, he's dismissed the mainstream approach as not being
         | intelligent in the Strange Loop sense, and was involved in
         | alternative approaches to AI.
         | 
         | Also, if we remove "faster", "repeating what other humans
         | verbally said" is something a lot of humans do, especially
         | little children. I think that may be the part that scares DH:
         | at this point, what these models are doing is not that
         | different (from a superficial POV) from what little kids or
         | even some adults do.
         | 
         | I still agree with you though, and I think what DH misses here
         | is the fact that IMHO there is no introspection at all in these
         | models. Somehow, in the case of humans, we go from parroting
         | like an LLM to introspection and genuinely original production
         | over the course of some years (n = 2 for me, but I think any
         | parent can confirm this; watching this happen in front of my
         | eyes has been one of the greatest experiences in my life), but I
         | can still understand how someone like DH would be confused by
         | the current state of affairs.
        
         | FrustratedMonky wrote:
         | >"Lets take someone, who is past their prime and interview them
         | on a topic they have never worked on. Then we can mine it for
         | quotes!"
         | 
         | So. Don't interview anybody over 40? Who judges who is past
         | their prime?
         | 
         | Most interviews are just mining for quotes. Isn't that why
         | interviews happen?
         | 
         | He's been in the AI field for what? 30 years? Has multiple
         | books.
         | 
         | I think he has earned enough respect to have an opinion, and it
         | is probably worth more than most people tossing out ad-hoc
         | opinions on AI in the last few months. Better than some
         | 'programmers', who are so in-the-weeds they have lost track of
         | what they are building.
        
         | gwright wrote:
         | > It's a language model and it fooled DH. It hasn't gotten
         | smarter than us yet. It's just faster at repeating what other
         | humans verbally said.
         | 
         | One of my takeaways from LLMs is that humans are very often
         | just repeating what other humans have said, with only a
         | superficial understanding of what they are repeating.
         | 
         | I think there is more to general intelligence than pattern
         | matching and mimicking, but a disconcerting amount of our
         | day-to-day human interactions might just be that.
        
       | roody15 wrote:
       | The issue with AI is that it may put too much power into a small
       | group's hands. Imagine if you wanted to develop a weaponized
       | prion or weaponized virus. In today's world this is possible but
       | requires a state actor with systems of control, committees,
       | oversight, and testing facilities. Due to human limits it also
       | takes time to complete.
       | 
       | Insert AI generation XIV... a small group of cult fanatics with
       | only slightly above-average IQs band together and now get to
       | skip all these limitations and are able to jump ahead to a
       | killer aerosol prion delivery weapon system.
       | 
       | This group of people who follow their great leader (Jimbo)
       | decide to release the weapon to save innocent souls before an
       | evil daemon comet flies past the earth and turns all remaining
       | humans into evil spirits.
       | 
       | My silly story is just to illustrate that there are many people
       | with high IQs who also have emotional issues and can fall prey
       | to cults, extremism, etc. Can humans be trusted using an AI with
       | a human IQ of 9000 which is able to simulate reality in seconds?
        
       | firebirdn99 wrote:
       | > But very soon they're going to be, they may very well be more
       | intelligent than us and far more intelligent than us. And at that
       | point, we will be receding into the background in some sense. We
       | will have handed the baton over to our successors, for better or
       | for worse.
       | 
       | Hinton also said, it's like we are a passing phase in evolution,
       | where we created these immortal beings.
       | 
       | Given that we are so bad at predicting the future and taking
       | precautionary measures (see the pandemic), even with all the
       | alarm bells sounding, we won't be able to do anything concrete
       | here. It's like we are mostly a reactive species; we don't have
       | terribly good incentives to act with foresight.
        
       | troft wrote:
       | [dead]
        
       | pbw wrote:
       | Hofstadter says humans will be like cockroaches compared to AI.
       | This is an oft-repeated line: sometimes we are ants or bacteria.
       | But I think these comparisons might be totally wrong.
       | 
       | I think it's very possible there's an Intelligence Completeness
       | theorem that's analogous to Turing Completeness. A theorem that
       | says intelligence is in some ways universal, and that our
       | intelligence will be compatible with all other forms of
       | intelligence, even if they are much "smarter".
       | 
       | Cockroaches are not an intelligent species, so they cannot
       | understand our thoughts. But humans are intelligent, human
       | languages have a universal grammar and can be indefinitely
       | extended with new words. I think this puts us in the intelligent-
       | species club, and all species in that club can discuss any idea.
       | 
       | AI might eventually be able to think much quicker than us, to see
       | patterns and make insights better and faster than us. But I don't
       | think that makes us cockroaches. I think if they are so smart,
       | they are by definition smart enough to explain any idea to us,
       | and with effort we'll be able to understand it and contribute
       | our own thoughts.
        
         | ses1984 wrote:
         | I think you're extrapolating from yourself and you think you
         | could learn any field given enough time. What about the type of
         | person with a fixed mindset who thinks they aren't good at math
         | or chemistry, if ai can't train them for whatever reason, even
         | if the reason is the person is stubborn and/or willfully
         | ignorant, are they more like a cockroach than a person?
         | 
         | What if someone tries really hard for a long time and can't
         | learn a field? Do they fail the intelligence test, or does
         | their teacher?
        
           | pbw wrote:
           | I'm talking about the entire human species, not myself or any
           | one person. I'm saying that humans relating to AIs would not
           | be like cockroaches relating to people. Cockroaches don't
           | have human-level language, but we do, and I'm proposing it is
           | generative and extensible enough to explain any idea. I'm
           | proposing there are non-intelligent species and intelligent
           | species, but there's no intelligent++ species that would look
           | down on us as cockroaches. I'm claiming that won't happen.
        
         | fergal_reid wrote:
         | Spend some time around a three-year-old. Human, human
         | intelligence, language skills.
         | 
         | Then try to explain quicksort to them. Obvious waste of time.
         | 
         | They wouldn't be much of a threat in a zero-sum strategic
         | interaction either.
        
         | wslh wrote:
         | > A theorem that says intelligence is in some ways universal,
         | and that our intelligence will be compatible with all other
         | forms of intelligence, even if they are much "smarter".
         | 
         | I would not be so reductionist. Intelligence doesn't seem to be
         | a universal thing; even IQ (a human-invented metric) is
         | measured in terms of some statistics. If you have an IQ of ~60
         | you have intelligence, but a completely different one from an
         | IQ >85.
         | 
         | > But humans are intelligent, human languages have a universal
         | grammar and can be indefinitely extended with new words. I
         | think this puts us in the intelligence species club, and all
         | species in that club can all discuss any idea.
         | 
         | Humans have different intelligences. You can be intelligent
         | (per the human definition of intelligence) but ignorant of
         | math. Again, this implies intelligence as we know it is not a
         | universal thing at higher levels: not all people can get a
         | physics Ph.D., just as not all people could be a good artist
         | whose technique is recognizably good; same for music, etc.
         | 
         | Yes, a cockroach is at another level of intelligence (or non-
         | intelligence), but that does not mean there is not a super-
         | intelligence that makes us relative cockroaches.
         | 
         | Also, without any intention of talking about religion or
         | "intelligent design", we can theorize that the Universe is
         | supersmart because it creates intelligent creatures, even if it
         | is not conscious of that. I would be very cautious about
         | defining intelligence in a universal way.
        
           | pbw wrote:
           | My point is you cannot teach a cockroach calculus, but if AIs
           | invent a new type of math, they would be able to teach it to
           | us. That's my claim. So the analogy of "we are cockroaches
           | compared to the AI" is wrong, that won't be the case.
           | 
           | Once you have "enough" intelligence to have a complex
           | language, like we do, I'm claiming you are in the club of
           | intelligent species, and all species in that club can
           | communicate ideas with each other. Even if the way they
           | natively think is quite different.
        
             | saulpw wrote:
             | AlphaZero invented new moves in the game of Go, but it
             | can't 'teach' them to us, it can only show us the moves and
             | let us figure it out for ourselves (which we're doing). But
             | note that despite this transfer of knowledge, humans didn't
             | rise up to the level of AlphaZero, and they may never be
             | able to. As a sibling comment points out, some things are
             | computationally bound--and humans have a limit to
             | computational ability (can't give ourselves more
             | neurons/connections), whereas AI does not.
        
             | orlp wrote:
             | > but if AIs invent a new type of math, they would be able
             | to teach it to us
             | 
             | There are already math proofs, made by humans this very
             | day, that run to hundreds upon hundreds of pages of highly
             | advanced lemmas building on other advanced results.
             | Understanding such a proof is an undertaking that literally
             | takes years. An AI might end up doing it in minutes. But
             | what an AI could cook up in years could take a human...
             | several lifetimes to understand.
             | 
             | As another example, take the design and fabrication of a
             | modern microprocessor. There are so many layers of
             | complexity involved, I would bet that no single person on
             | this planet has all the required knowledge end-to-end
             | needed to manufacture it.
             | 
             | As soon as the complexity of an AI's knowledge reaches a
             | certain point, it essentially becomes unteachable in any
             | reasonable amount of time. Perhaps smaller sub-parts could
             | be distilled and taught, but I think it's naive to assume
             | all knowledge can be sliced and diced into human-bite-
             | sized chunks.
        
               | pbw wrote:
               | I agree "one AI" might produce output that keeps humans
               | busy for decades. But that doesn't make us cockroaches.
               | Cockroaches can't understand language, at all. You can't
               | teach a cockroach calculus if you had a trillion years.
               | That's not our position relative to AIs. We will be
               | learning shit-tons from them constantly. I think people
               | who say humans will be "cockroaches" or "ants" or
               | "bacteria" are fear-mongering, or just confused.
        
             | arketyp wrote:
             | I'm reminded of how one writes programs. I cannot maintain
             | the state of the machine in my head, but I can convince
             | myself of its workings, its intelligence, by reading the
               | code, following along with its line of reasoning, as it
               | were. I think Intelligence Completeness may boil down to
               | the very same Church-Turing thesis.
        
               | pbw wrote:
               | Yes, I agree it might be the same thing under the hood. But
               | with intelligence many very smart people seem to fall
               | into using these analogies that diminish humans in a way
               | I don't think is accurate. And I feel that makes people
               | more scared of AI than they need to be, makes AI seem
               | totally alien. [1]
               | 
               | The AIs might spit out entire fields of knowledge, and it
               | might take humans decades of study to understand it all.
               | And no single human might actually understand it all at
               | the same time. But that's how very advanced fields of
               | study already are.
               | 
               | But the "cockroach" slur implies AIs would be in this
               | other stratosphere having endless discussions that we
               | cannot remotely grok. My guess is that won't happen.
               | Because if the AI were to say "I cannot explain this to
               | you" I'd take that as evidence it wasn't all that
               | intelligent after all.
               | 
               | [1] - https://metastable.org/alien.html
        
         | jimbokun wrote:
         | Isn't the difference between us and cockroaches just "we can
         | think much quicker, see patterns and make insights better and
         | faster than cockroaches"?
        
           | pbw wrote:
           | I think the key difference is language. I think human
           | language is above a key threshold. Our syntaxes are
           | infinitely generative and we have large vocabularies
           | (100,000+ words) which are fully extensible. No other animals
           | have that. My claim is AIs will be able to express any
           | complex idea in our language. But we cannot express our
           | ideas in "cockroach language". So the analogy is not a good
           | one.
        
         | scrawl wrote:
         | >smart enough to explain us any idea
         | 
         | Are dogs, or pigs, or whales, part of the intelligence club?
         | They are clearly intelligent beings with problem-solving
         | skills. We won't be teaching them basic calculus any time soon.
        
           | pbw wrote:
           | No non-human animals are in the club that's marked by having
           | a language with an infinitely generative syntax and a large
           | (100,000+ words) and always-growing vocabulary.
           | 
           | Intelligence might be a spectrum, but powerful generative
           | language is a step function: you have it or you don't. If you
           | have it, then higher intelligences can communicate complex
           | thoughts to you, if you don't they can't. We have it, so we
           | are in the club, we are not cockroaches.
        
         | hyperthesis wrote:
         | Humans have limited "working memory". We manage to cram more
         | into it via hierarchical decomposition into "chunks", a "chunk"
         | being a single concept that is more complex inside.
         | 
         | I submit that not everything can be hierarchically decomposed
         | in a way that's useful - i.e. any "abstraction" you try to
         | force on it is more leaky than non-leaky; in that it doesn't
         | simplify its interactions with other chunks. You might say it's
         | the wrong abstraction - but there's no guarantee there is a
         | right abstraction. Some things are just complex. (This is
         | hypothetical, since I don't think we can conceive of any
         | concepts we can't understand.)
         | 
         | An AI could have an arbitrarily large working memory.
         | 
         | Note: I'm talking about intuitive understanding. We could use
         | it mechanically, just never "get it", cowering before icons,
         | being the one in Searle's Chinese Room
         | https://wikipedia.org/wiki/Chinese_room
        
           | pbw wrote:
           | I suspect the limit of what can be expressed in human
           | language and comprehended by the human mind is vast, but yes,
           | not infinite. I think the AIs will absolutely saturate the
           | connection between them and us, with a non-stop torrent of
           | information which will range from useful to civilization-
           | changing.
           | 
           | And I think this is all very unlike how we are currently
           | impacting the lives of cockroaches with our insights about
           | quantum computing. Thus, it's not a good analogy.
        
       | gundmc wrote:
       | A very interesting conversation, but the article really makes the
       | reader work to understand what his previous criticisms were and
       | how they have now changed. It feels like the author assumes the
       | reader has been closely following academic discourse on the
       | subject. Maybe that's a fair assumption for their typical readers
       | but it does make the article less accessible for newer readers.
        
       | facu17y wrote:
       | There is no AI risk. There is only the risk of bad or
       | "unfavorable" actors using AI.
       | 
       | AI in a killer drone unleashed on civilians? The bad actor is the
       | one who deployed this weapon.
       | 
       | AI given agency and goal maximization ending up gaining physical
       | form all on its own and killing people? Or hacking into bank
       | accounts to enrich its creator?
       | 
       | The latter is more likely than the former, but for cyber-
       | offensive AI there is cyber-defensive AI.
       | 
       | Musk lately admitted that he's an AI accelerationist (following
       | lots of the e/acc people and liking their posts) and despite his
       | dystopian view of AI he's pushed it very hard at Tesla. He just
       | wants the US to give him control of it (under the pretext that no
       | one else can manage it safely).
        
         | Marcan wrote:
         | Your first sentence literally mirrors this slogan:
         | 
         | https://en.m.wikipedia.org/wiki/Guns_don%27t_kill_people,_pe...
        
           | facu17y wrote:
           | Show me how I can kill you with my LLM, or my GAN or
           | Diffusion Model.
           | 
           | I dare you!
        
       ___________________________________________________________________
       (page generated 2023-07-03 23:01 UTC)