[HN Gopher] Vernor Vinge has died
       ___________________________________________________________________
        
       Vernor Vinge has died
        
       Author : sohkamyung
       Score  : 886 points
       Date   : 2024-03-21 06:08 UTC (16 hours ago)
        
 (HTM) web link (file770.com)
 (TXT) w3m dump (file770.com)
        
       | demaga wrote:
        | Haven't read his fiction yet, but his singularity piece is very
        | interesting:
       | https://edoras.sdsu.edu/~vinge/misc/singularity.html
        
         | nabla9 wrote:
         | "Rainbows End" is his singularity book.
        
           | tialaramex wrote:
           | I would argue that all of Vinge's longer works are about
           | Singularitarian disasters. In Tatja we eventually figure out
           | that Tatja herself is arguably the disaster. In Fire it was
           | asleep in the library, and I think in Rainbows there's both
           | Rabbit obviously and the weapon the story focuses on.
           | 
           | You can think of the apparent survival of Rabbit as a hint of
           | doom right at the end, like the fact R's diary is in the
           | slush pile at the end of the Watchmen comic book.
        
             | nabla9 wrote:
              | Vinge's technological singularity is an explosion of things
              | changing, not a "rapture of nerds".
        
             | Vetch wrote:
             | I disagree with your characterization of Vinge's works as
             | primarily about disasters but I agree they were all about
             | an accelerating technological pace and its relation with
             | intelligence.
             | 
             | I'm fairly certain the mysterious event in Marooned in
             | Realtime was Ascension.
             | 
             | For Fire Upon Deep, it was sealed and there was a powerful
             | countermeasure.
             | 
             | The rabbit of Rainbows End felt like a trickster to me.
             | Child-like playfulness, fey-like chaotic neutral at worst.
             | I do not interpret Rabbit's survival as hints of doom. The
             | weapon was plain old human abuse of power for control.
        
               | tialaramex wrote:
               | I think I've said "catastrophes" before rather than
               | "disasters" and I think that's a better word, but I stand
               | by it.
               | 
               | It doesn't matter that Rabbit doesn't intend harm.
               | Neither does Tatja, at least to those who aren't trying
               | to harm her. But well, look at what she does, at first
               | she almost gets a few dozen people killed, reckless
               | teenager but hardly extraordinary, next time we see her
               | she's about to tear apart a kingdom to fraudulently seize
               | power, and as collateral she's (without telling them)
               | ensured everybody she knew previously will die if she
               | fails. By the end Tatja has started a war in order to
                | seize control of a means to signal off world. Only two
                | other people on her world even realise what "signalling
                | off world" would mean, but she's potentially going to
                | kill huge numbers of people to achieve it anyway.
               | She's a catastrophe even though that wasn't her intent.
               | She does apologise, for whatever it's worth, right at the
               | very end, to people who were close to her and from whom
               | she belatedly realises she is now so distant.
               | 
                | Rabbit is indeed just playing. When the library nearly
                | falls over and kills a _lot_ of university staff and
                | students, that's just a small taste of what happens when
                | playful Rabbit forgets for a moment that this isn't
                | really just a game. Consider just how powerful Rabbit is,
                | remembering that all of this is a _distraction_. The
                | whole fight, which causes massive disruption to the city
                | and easily could have led to enormous loss of life, isn't
                | what Rabbit was really doing; it was just to distract
                | Bob's team so that they wouldn't focus on the labs for a
                | few hours. And remember that Rabbit's goal here is
                | clearly to secure the weapon for itself, not to deny it
                | to the antagonist.
        
               | underlipton wrote:
               | This is a compelling argument, but I think it's overly
               | pessimistic. Back on the human side, the ending sees
               | Robert adapting to his situation; he loses his left arm
               | (his "sinister"), and it looks like he's lost his wife
               | for good, but he's managed to find some amount of synergy
               | with the new world and technology he's surrounded by.
               | Combined with Rabbit's temporary "defeat" (an experience
               | that, if he's truly a super-intelligence capable of true
               | learning and growth, should lead him to different means
               | and even ends in the future, if nothing else), the
               | implicit conclusion seems to be a future with an
               | imperfect but livable melding of humanity and technology.
               | Not too different from what's come before. Putting all of
               | human history onto a single drive likewise might seem
               | like a diminishing of its significance, but the fact is
               | that it's still there to dive into, should one desire.
               | That's arguably a step up from the past.
        
             | NoMoreNicksLeft wrote:
             | The Rabbit had a sense of morality. I do not think it
             | intended to enslave or destroy humanity, or any other
             | monstrous end. It kept bargains that it could have cheated,
             | when cheating those bargains cost it nothing. This is at
             | least a hint of a sense of justice. The Rabbit was likely
             | the adversary of some other entity, perhaps something very
             | Blight-like.
        
           | NoMoreNicksLeft wrote:
           | I'm half-convinced that the Rabbit was an ancient trickster
           | god, and not an AI. Is AI even the correct term? If the
           | Rabbit was a technological non-human intelligence, then
           | surely it was never created (even by accident), and
           | emerged/grew from the computosphere. No governments seemed to
           | be aware of any other government having created it, and two
           | of the nearly-main characters were special operatives tasked
           | with knowing about shit like that and shutting it down before
           | it could result in doomsday scenarios.
           | 
           | I suspect very strongly that had we gotten a followup or two,
           | it would have turned out that the Rabbit had been around for
           | a very long time before even the first transistor.
        
           | hinkley wrote:
            | Marooned in Realtime is about people who missed the
            | singularity.
        
       | progbits wrote:
        | I knew "Fire upon the deep" would be a good book just a few
        | pages in, where in the acknowledgements Vinge thanks "the
        | organizers of the Arctic '88 distributed systems course at the
        | University of Tromso".
        
       | growt wrote:
       | Doesn't he deserve the black bar on top of HN?
        
         | ilaksh wrote:
         | Yes. I assume the admin is sleeping. @dang
        
           | layer8 wrote:
           | I don't think "@dang" is doing anything. You need to email
           | hn@ycombinator.com.
        
         | dbuxton wrote:
         | +1
        
         | sdeer wrote:
         | +1
        
         | _0ffh wrote:
         | agreed!
        
         | ompogUe wrote:
         | +1
        
       | arethuza wrote:
       | _" So High, So Low, So Many Things to Know."_
        
       | re wrote:
       | If you haven't read A Fire Upon The Deep (or even if you already
       | have), you can read the prologue and first few chapters here:
       | https://www.baen.com/Chapters/-0812515285/A_Fire_Upon_the_De...
        
       | jl6 wrote:
       | Oh man, this makes me sad.
       | 
       | I remember reading _A Fire Upon the Deep_ based on a Usenet
       | recommendation, and then immediately wanting to read everything
       | else he wrote. _A Deepness in the Sky_ is a worthy sequel.
       | 
       | He wasn't prolific, but what he wrote was gold. He had a
       | Tolkienesque ability to build world depth not by lengthy
       | exposition, but by expert _omission_.
       | 
       | A true name in sci-fi.
        
         | emmelaich wrote:
         | 'true name'?
         | 
         | https://en.wikipedia.org/wiki/True_Names
         | 
         | THE cyberpunk book.
         | 
         | Also, his later books are great but the "Across Realtime"
         | trilogy has a special place in my heart.
         | https://www.goodreads.com/en/book/show/167844
        
           | fossuser wrote:
            | I'm pretty sure Yudkowsky read True Names and it's what
            | caused him to focus his life on the alignment problem.
            | 
            | That novella is basically an illustrated warning about
            | misaligned superintelligence (it's also really good!).
        
           | hinkley wrote:
           | I still want augmented Chess to be a sport. You get a
           | computer not weighing more than X pounds.
        
         | TimSchumann wrote:
         | I'm stuck halfway through Deepness in the Sky, I should pick it
         | up again.
         | 
         | Also stuck on book 8 of the Wheel of Time series, I was like 5
         | chapters in and didn't pick up a single thread I cared about
         | from the previous book.
         | 
         | Agree about the expert omission part.
        
           | komaromy wrote:
           | _Deepness_ was well worth it.
           | 
           | Wheel of Time, on the other hand, I was very glad to give up
           | on right around the same point as you.
        
           | joshstrange wrote:
           | I think WoT is worth pushing through, you got stuck in the
           | same spot a number of people do. There is definitely a lull
           | there.
           | 
           | Many times I've considered re-cutting the books/audio-books
            | for WoT to remove what I find to be drudgery, but it would
           | be a massive task that I'm not up to. I just skip over the
           | parts in my re-reads of the series.
           | 
           | I'll be the first to say that WoT has /many/ flaws but it
           | will forever hold a special place in my heart. You just have
           | to get past the way women are written in the series (and I
           | understand if you can't). That's something else I'd be happy
           | to prune out or ideally fix but that's well beyond my skill
            | set. Elayne and Egwene especially are horribly written in the
           | last few books (and it's not all Brandon Sanderson's fault I
           | assume, they aren't great in the prior books either).
        
           | hinkley wrote:
           | Book eight may have been about when I gave up and sold the
           | set. The dull bit in the middle of each book is when I would
           | practice my speed reading.
           | 
           | About once a season I contemplate the idea of approaching
           | either Sanderson or the Jordan estate and ask that they
           | consider an abridged edition edited by Sanderson. You could
           | easily knock 1500 pages out of this series and not change a
           | single thing.
           | 
           | Meanwhile Rosamund Pike is doing new audio books for the
           | series and the samples sound much better than the old one.
           | But the first one is about forty hours. As much as I might
           | like to claim that I would listen to her read a phone book, I
           | don't think I can listen to 600 hours of audiobooks for one
           | series.
        
         | angiosperm wrote:
         | Let us not neglect _The Peace War_ and _Across Realtime_. The
         | former introduced memorable tragic figures, besides its
         | singular vision.
        
         | moomin wrote:
         | Bizarrely, there's a second sequel to A Fire Upon The Deep, but
         | it's never been digitised.
        
           | SECProto wrote:
           | _Children Of The Sky_ is certainly available digitally.
        
             | throw1234651234 wrote:
             | It's a bad book, nowhere close to the first two in any
             | regard.
        
               | NoMoreNicksLeft wrote:
               | It's not bad. It looks to be what would have become the
               | first of a trilogy. It's just slow and sets the stage for
               | something that culminates in another Fire Upon the Deep
               | tier finale.
        
               | stormking wrote:
               | People wanted to read how Pham Nuwen defeated the
               | Emergents, learn more about the Zones of Thought or see
               | the Blight finally destroyed once and for all.
               | 
               | No one wanted more Game of Dogs.
        
               | db48x wrote:
               | Well, the characters are stuck on a primitive planet in
               | the Slow Zone so if you go in expecting Space Opera then
               | you'll be disappointed. If you go in with a more open
               | mind then you may find that there's actually an
               | interesting philosophical point to be examined and a
               | decent story built around it.
        
               | bkcooper wrote:
               | _Well, the characters are stuck on a primitive planet in
               | the Slow Zone so if you go in expecting Space Opera then
               | you'll be disappointed._
               | 
               | Except half of _Fire Upon the Deep_ was characters on the
               | same planet but it was actually cool. The first two books
               | are definitely among my favorite sci-fi of all time, the
               | third one was a dud.
               | 
               | My main gripe is that these three books all share the
               | same trope that underpins one of the major subplots:
               | glib, charming politician type is scheming, eeeeevil. In
               | the first two books, there's enough novelty (how the
               | Tines and Spiders work, programming as archaeology,
               | localizer mania) to make up for that. But I don't really
               | think the third book adds much in the same way, and it is
               | also very clearly building to a confrontation that will
               | happen in a future book. So the staleness is much more
                | noticeable.
        
         | throw1234651234 wrote:
         | VV is up there with Stephenson and Gibson as the top 3. I don't
         | put Asimov, etc in there since Asimov was hard sci-fi to the
         | max and couldn't write a character to save his life, much like
         | later Stephenson.
         | 
         | I wish I could find something else like VV's work that's sort
         | of under-the-radar. I do have to mention that things like The
          | Three Body Problem get hype, but are several tiers below VV's
          | work.
        
           | bosquefrio wrote:
           | Vinge is certainly one of the greats but so is David Brin. I
           | would not consider him under the radar though. Some of his
           | best are Earth, The Heart of The Comet, Glory Season.
        
             | throw1234651234 wrote:
             | I don't know Brin at all, my first thought was "Sergey?!" -
             | will check out his books and appreciate the recommendation.
        
             | selimthegrim wrote:
             | Brin has a post on his FB wall mourning Vinge.
        
               | bosquefrio wrote:
               | I believe they were friends. Brin mentioned that he hung
               | out with Vinge a few weeks ago.
        
           | FromOmelas wrote:
           | not quite the same, but Iain M. Banks is in my top 5, along
           | with Vernor Vinge.
        
       | jonathanleane wrote:
       | This guy was one of the greats. A deepness in the sky (the
       | sequel) is one of my favourite sci fi books of all time, and even
       | better than Fire upon the deep imo.
        
         | Voultapher wrote:
         | Thomas Nau is such a fantastic villain. Not evil for the sake
         | of evil, but rather reasoned decisions with terrible prices.
        
           | atemerev wrote:
           | Reasoned decisions (if you think that empire building is
           | reasonable) without morality and empathy _are_ evil. This is
           | how Putin operates.
           | 
           | Also, raping and torturing are very "evil for the sake of
           | evil", if you ask me.
        
             | smogcutter wrote:
             | Yeah, discovering Nau's chamber of horrors is meant to
             | strip any illusions about his motivations.
        
               | Voultapher wrote:
               | Book spoilers.
               | 
               | IIRC wasn't it the chamber/ship of someone he worked
               | with, that he tolerated? Read it like six or seven years
               | ago, so the details are fuzzy. The impression I kept was
               | that he did a lot of evil stuff not because he relished
               | the suffering he created in others, but because he didn't
               | mind it.
        
               | vvillena wrote:
               | Yes, it was not his chamber, but Nau never wanted one
               | because he kept a pet in the open.
        
               | int_19h wrote:
               | It's both. On one hand, he is aware that one of his
               | valued subordinates "needs" to regularly murder people,
               | and doesn't consider it an issue so long as that
               | subordinate remains productive and is kept in check to
               | avoid "wasting resources".
               | 
               | But there's _also_ a record of him personally torturing
               | and raping one of the captives for the sake of it - which
               | he keeps around, presumably to rewatch every now and
               | then.
        
           | rakejake wrote:
           | I think Greg Egan in one of his novels has a line that goes
           | like "Humans cannot be universe conquerors if they don't
            | overcome their bug-like tendencies to invade and destroy".
           | Nah, it is this very tendency that makes them universe
           | conquerors. Nothing to beat good old fashioned greed and
           | discontent.
        
           | natechols wrote:
           | > Not evil for the sake of evil, but rather reasoned
           | decisions with terrible prices
           | 
           | The Emergents and their system are pretty clearly just evil,
           | and there's never any indication given that they actually
           | care about those terrible prices, or even reflect on them for
           | long. Vinge is very good at channeling the Orwellian language
           | that regimes like these use, but I didn't find his intent at
           | all ambiguous.
           | 
           | The really compelling and ambiguous character in that book is
           | [redacted spoiler], who really does grapple with the moral
           | implications of his decisions, but ultimately chooses the
            | not-evil path. Personally I think this also highlights
           | Vinge's biggest flaw as an author for me, which is that in
           | all of his books, the most fully realized and believable
           | protagonist is a scheming megalomaniac, with second place
           | going to the abusive misanthrope of Rainbows End, and third
           | to the prickly settlement leader in Marooned in Realtime. All
           | of the more sympathetic characters feel like empty vessels
           | that just react to the plot.
        
         | rakejake wrote:
         | A Deepness in the Sky was perhaps the first "hard sci-fi" novel
          | I ever read (this was before I knew of Greg Egan). The concept
          | of the Spiders and the OnOff star was just awe-inspiring.
         | 
         | While Egan's idea-density is off the charts, I found Deepness
         | in the Sky to be the most complete and entertaining hard-scifi
         | novel. It has a lot of novel science but ensures that the
         | reader is never overwhelmed (Egan will have you overwhelmed
         | within the first paragraph of the first page). Highly
         | entertaining and interesting.
         | 
         | I wonder what Vinge thought of LLMs. If you've read the book,
         | Vinge had literal human LMs in the novel to decode the Spider
         | language. Maybe he just didn't anticipate that computers could
         | do what they do today.
         | 
         | A huge loss indeed.
        
           | rsynnott wrote:
           | > If you've read the book, Vinge had literal human LMs in the
           | novel to decode the Spider language. Maybe he just didn't
           | anticipate that computers could do what they do today.
           | 
           | I mean, I don't think LLMs have been notably useful in
           | decoding unknown languages, have they?
        
             | rakejake wrote:
              | No idea, though since they're next-token predictors, it
              | can't hurt to use LLMs?
        
             | jerf wrote:
             | All currently-unknown real languages that an LLM might
             | decode are languages that are unknown because of a lack of
              | data, due to the civilization being dead. An LLM won't
             | necessarily be able to overcome that.
             | 
             | In the book the characters had access to effectively
             | unbounded input since it was a live civilization generating
             | the data, plus they had reference to at least some video,
             | and... something else that would be very useful for
             | decoding language but would constitute probably a medium-
             | grade spoiler if I shared, so there's another relevant
             | difference.
             | 
              | Still, it should also be said it wasn't literally LLMs; it
              | was humans, merely "affected" in a way that made them all
              | basically idiot savants on the particular topic of
              | language acquisition.
        
               | rsynnott wrote:
               | Oh, yeah; I'm just not convinced there's any particular
               | reason to think that LLMs would be useful for decoding
               | languages.
               | 
               | (That said it would be an interesting _experiment_, if a
               | little hard to set up; you'd need a live language which
               | hadn't made it into the LLM's training set at all, so
               | you'd probably need to purpose-train an LLM...)
        
               | justsomehnguy wrote:
                | LLMs are... not bad at finding semantic relationships in
                | arbitrary data. Sure, if you dump an unknown language
                | into an LLM you can only get semantically correct
                | sentences of unknown meaning, but as you start to decode
                | the language itself it would be way easier to find the
                | relationships there, if not to just outright replace the
                | terms with translated ones.
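                | 
                | The distributional intuition here can be sketched without
                | any LLM at all: words used in similar contexts end up
                | with similar co-occurrence vectors, a minimal stand-in
                | for the semantic relationships an LLM's embeddings would
                | capture. A toy sketch in Python (the "unknown language"
                | corpus and every word in it are invented for
                | illustration):

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def cooccurrence_vectors(sentences):
    """Map each word to a Counter of the words it shares a sentence with."""
    vecs = {}
    for sent in sentences:
        for a, b in combinations(sent.split(), 2):
            vecs.setdefault(a, Counter())[b] += 1
            vecs.setdefault(b, Counter())[a] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# A made-up "unknown language": "grib" and "fren" always appear in the
# same contexts, so their co-occurrence vectors come out identical.
corpus = [
    "grib zul tak",
    "grib zul mor",
    "fren zul tak",
    "fren zul mor",
]
vecs = cooccurrence_vectors(corpus)
print(cosine(vecs["grib"], vecs["fren"]))  # ~1.0: interchangeable contexts
print(cosine(vecs["grib"], vecs["zul"]))   # much lower: different role
```

                | Words used interchangeably ("grib"/"fren") score near
                | 1.0, while words playing different roles score much
                | lower - the kind of structure a decoder could bootstrap
                | from.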
        
           | n4r9 wrote:
           | > Vinge had literal human LMs in the novel to decode the
           | Spider language.
           | 
           | Could you elaborate on this? It's been a while since I read
           | the novel. I remember the use of Focus to create obsessive
           | problem-solvers, but not sure how it relates to generative
           | models or LLMs.
           | 
           | Thinking about it, I'm not sure how useful LLMs can be for
           | translating entirely new languages. As I understand it they
           | rely on statistical correlations harvested from training data
           | which would not include any existing translations by
           | definition.
        
             | rakejake wrote:
             | I do not recall the exact details but I remember that some
             | of the focused individuals were kept in a grid or matrix of
              | some sort. The aim of these grids was to translate the
             | spider-talk and achieve some form of conversation with the
             | spiders on the planet. It is also mentioned that the
             | focused individuals have their own invented language with
              | which they communicate with other focused individuals, and
              | which is faster and more efficient than human languages.
             | 
             | I may be misremembering certain details, but the similarity
             | to neural networks and their use in machine translation was
             | quite apparent.
        
               | NoMoreNicksLeft wrote:
               | The zipheads were crippled with a weaponized virus that
               | turned them all into autistic savants. The virus was
                | somewhat magnetic, and using MRI-like technologies, they
               | could target specific parts of the brain to be affected
               | to lesser or greater degrees. It's been awhile since I've
               | re-read it, but "focused" was the propaganda label for it
               | from the monstrous tyrannical regime that used it to turn
               | people into zombies, no?
        
               | rakejake wrote:
               | Yes, they could target specific portions of the brain.
               | Have to re-read the book!
        
               | db48x wrote:
               | Not zombies, but loving slaves. People able to apply all
               | of their creativity and problem-solving skills to any
               | task given to them, but without much capacity for
               | reflection or any kind of personal ambitions or desires.
        
       | rkachowski wrote:
       | > We spanned a pretty wide spectrum - politically! Yet, we KBs
       | [Killer B's] (Vernor was a full member ... )
       | 
       | Does this mean something other than a wu tang fan?
        
         | B1FF_PSUVM wrote:
         | I think that was later ... In the 1980s there was a surfeit of
         | B-initial SF writers (as in the article's picture).
         | 
         | Also there was some media noise about killer bees, and they
         | became part of the standard dungeon zoo, I think. Plus,
         | possibly just "the Bees" would prompt "What, the Bee Gees?", a
         | terrible risk.
        
         | zem wrote:
         | the "killer B's" originally referred to Greg Bear, Gregory
         | Benford, and David Brin
        
       | Voultapher wrote:
       | :(
       | 
        | Currently reading The Children of the Sky. And wow, I had
        | somewhat forgotten how good sci-fi can be. So much depth, such
        | coherent and well-thought-out worlds.
        
       | Klaster_1 wrote:
        | Oh man, that's sad to hear. I really loved his books, especially
        | the ones that looked into the future from a modern-day
        | engineer's point of view. "Rainbows End" comes to my mind quite
        | often as I read the tech news; it paints a picture of a future
        | that seems to get closer day by day - a sci-fi future you can
        | realistically believe you'll live in one day.
        
       | reducesuffering wrote:
       | It is astonishing how many of the great sci-fi writers are/were
       | around California: Vernor, PK Dick, Ursula Le Guin, Huxley, Frank
        | Herbert, Bradbury, Heinlein, Niven, etc. Per capita, it has to
        | be several orders of magnitude higher.
        
         | 082349872349872 wrote:
          | Le Guin and Niven are the only ones on your list who were
          | born there; all the rest explicitly chose to be there.
        
         | zabzonk wrote:
          | you could probably say the same about the east coast, and NY in
          | particular - Pohl, Bester et al.
        
         | sph wrote:
         | They always had the best LSD. Not necessary, but won't hurt to
         | explore and experience the depth of your creativity.
        
         | nabla9 wrote:
         | Scotland's per capita big name sci-fi writer contribution must
         | be 1 or 2 orders of magnitude higher still.
        
       | aaron695 wrote:
       | Interview from ReasonTV -
       | https://www.youtube.com/watch?v=alxyAeCPits (2011)
       | 
       | I love the cyberpunk vibe of "Prisoners of Gravity" (1992) but it
       | only discusses "Marooned in Realtime"-
       | https://www.youtube.com/watch?v=ERRA8qXvuyU&t=87s
        
       | steve1977 wrote:
        | He wrote some of my favorite sci-fi books. I was aware he hadn't
        | been in good health for a while already; it's still sad to hear
        | about his passing, of course. Thank you for the worlds you
        | showed me.
        
       | Simon321 wrote:
        | He coined the concept 'singularity' in the sense of machines
        | becoming smarter than humans. What a time for him to die, with
        | all the advancements we're seeing in artificial intelligence. I
        | wonder what he thought about it all.
       | 
       | >The concept and the term "singularity" were popularized by
       | Vernor Vinge first in 1983 in an article that claimed that once
       | humans create intelligences greater than their own, there will be
       | a technological and social transition similar in some sense to
       | "the knotted space-time at the center of a black hole",[8] and
       | later in his 1993 essay The Coming Technological
       | Singularity,[4][7] in which he wrote that it would signal the end
       | of the human era, as the new superintelligence would continue to
       | upgrade itself and would advance technologically at an
       | incomprehensible rate. He wrote that he would be surprised if it
       | occurred before 2005 or after 2030.
       | 
       | Looks like he was spot on.
        
         | trenchgun wrote:
          | He popularized and advanced the concept, but originally it
          | came from von Neumann.
        
           | nabla9 wrote:
            | The concept predates von Neumann.
           | 
            | The first known person to present the idea was the
            | mathematician and philosopher Nicolas de Condorcet in the
            | late 1700s. Not surprising, because he also laid out most of
            | the ideals and values of modern liberal democracy as they
            | stand now. Amazing philosopher.
           | 
           | He basically invented the idea of ensemble learning (known as
           | boosting in machine learning).
           | 
           | Nicolas de Condorcet and the First Intelligence Explosion
           | Hypothesis
           | https://onlinelibrary.wiley.com/doi/10.1609/aimag.v40i1.2855
        
             | n4r9 wrote:
             | That kind of niche knowledge is what I come to HN for!
        
               | protomolecule wrote:
               | Also "Darwin among the Machines"[0] written by Samuel
               | Butler in 1863, that's 4 years after Darwin's "On the
               | Origin of Species".
               | 
               | Butlerian jihad[1] is the war against machines in the
               | Dune universe.
               | 
               | [0]
               | https://en.wikipedia.org/wiki/Darwin_among_the_Machines
               | 
               | [1] https://dune.fandom.com/wiki/Butlerian_Jihad
        
               | jhbadger wrote:
                | Butler also expanded this idea in his 1872 novel
                | Erewhon, where he described a seemingly primitive
                | island civilization that turned out to have once had
                | greater technology than the West, including
                | mechanical AI, but abandoned it when its people
                | began to fear its consequences. A lot of 20th
                | century SF tropes, already there in the Victorian
                | period.
               | 
               | https://en.wikipedia.org/wiki/Erewhon
        
             | jart wrote:
             | That essay is written by a political scientist. His
             | arguments aren't very persuasive. Even if they were, he
             | doesn't actually cite the person he's writing about, so I
             | have no way to check the primary materials. It's not like
             | this is uncommon either. Everyone who's smart since 1760
             | has extrapolated the industrial revolution and imagined
             | something similar to the singularity. Malthus would be a
             | bad example and Nietzsche would be a good example. But John
             | von Neumann was a million times smarter than all of them,
             | he named it the singularity, and that's why he gets the
             | credit.
        
               | tim333 wrote:
                | There are some quotes, but the guy seems to be
                | talking about improving humans rather than anything
                | AI-like:
               | 
               | "...natural [human] faculties themselves and this [human
               | body] organisation could also be improved?"
        
               | nabla9 wrote:
                | Check out _"Sketch for a Historical Picture of the
               | Progress of the Human Mind"_, by Marquis de Condorcet,
               | 1794. The last chapter, The Tenth epoch/The future
               | progress of the human mind. There he lays out unlimited
               | advance of knowledge, unlimited lifespan for humans,
               | improvement of physical faculties, and then finally
               | improvement of the intellectual and moral faculties.
               | 
                | And this was not some obscure author, but a leading
                | figure in the French Enlightenment. Thomas Malthus
                | wrote his essay on population as a counterargument.
        
         | gcr wrote:
          | With respect, we don't know if he was spot on. Companies
          | shoehorning language models into their products is a far
          | cry from the transformative societal change he describes.
          | Nothing like a singularity has yet happened at the scale
          | he describes, and it might not happen without more
          | fundamental shifts/breakthroughs in AI research.
        
           | mnsc wrote:
           | Imagine the first llm to suggest an improvement to itself
           | that no human has considered. Then imagine what happens next.
        
             | dsr_ wrote:
             | OK. I'm imagining a correlation engine that looks through
             | code as a series of prompts that are used to generate more
             | code from the corpus that is statistically likely to
             | follow.
             | 
             | And now I'm transforming that through the concept of taking
             | a photograph and applying the clone tool via a light
             | airbrush.
             | 
             | Repeat enough times, and you get uncompilable mud.
             | 
             | LLMs are not going to generate improvements.
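The "uncompilable mud" intuition has a toy analogue that is easy to run: repeatedly fit a distribution to samples drawn from the previous fit. This is only a sketch of the iterated-self-training argument, not a claim about real LLMs; the Gaussian, the sample size, and the generation count are arbitrary assumptions chosen for illustration.

```python
# Toy analogue of iterated self-training: fit a Gaussian to samples
# drawn from the previous generation's fit, sample from the new fit,
# and repeat. With a finite sample each generation, the fitted
# spread drifts in log space with a net downward bias, so diversity
# collapses over many generations.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0  # "generation zero" distribution
for generation in range(5000):
    samples = [random.gauss(mu, sigma) for _ in range(10)]
    mu = statistics.fmean(samples)     # refit mean
    sigma = statistics.stdev(samples)  # refit spread (biased low)

print(f"spread after 5000 generations: {sigma:.3e}")
```

After enough rounds the fitted spread is vanishingly small relative to the original 1.0: the analogue of every output looking like every other output.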
        
               | ben_w wrote:
               | Saying they definitely won't or they definitely will are
               | equally over-broad and premature.
               | 
               | I currently expect we'll need another architectural
               | breakthrough; but also, back in 2009 I expected no-
               | steering-wheel-included self driving cars no later than
               | 2018, and that the LLM output we actually saw in 2023
               | would be the final problem to be solved in the path to
               | AGI.
               | 
               | Prediction is hard, especially about the future.
        
               | jart wrote:
               | GPT4 does inference at 560 teraflops. Human brain goes
               | 10,000 teraflops. NVIDIA just unveiled their latest
               | Blackwell chip yesterday which goes 20,000 teraflops. If
               | you buy an NVL72 rack of the things, it goes 1,400,000
               | teraflops. That's what Jensen Huang's GPT runs on I bet.
        
               | ben_w wrote:
               | > GPT4 does inference at 560 teraflops. Human brain goes
               | 10,000 teraflops
               | 
                | AFAICT, both are guesses. The estimates I've seen
                | for human brains range from ~ 162 GFLOPS[0] to
                | 10^28 FLOPS[1];
               | even just the model size for GPT-4 isn't confirmed,
               | merely a combination of human inference of public
               | information with a rumour widely described as a "leak",
               | likewise the compute requirements.
               | 
               | [0] https://geohot.github.io//blog/jekyll/update/2022/02/
               | 17/brai...
               | 
               | [1] https://aiimpacts.org/brain-performance-in-flops/
        
               | jart wrote:
               | They're not guesses. We know they use A100s and we know
               | how fast an A100 goes. You can cut a brain open and see
               | how many neurons it has and how often they fire.
               | Kurzweil's 10 petaflops for the brain (100e9 neurons *
               | 1000 connections * 200 calculations) is a bit high for me
               | honestly. I don't think connections count as flops. If a
               | neuron only fires 5-50 times a second then that'd put the
               | human brain at .5 to 5 teraflops it seems to me. That
               | would explain why GPT is so much smarter and faster than
               | people. The other estimates like 1e28 are measuring
               | different things.
        
               | mlyle wrote:
               | > I don't think connections count as flops. If a neuron
               | only fires 5-50 times a second then that'd put the human
               | brain at .5 to 5 teraflops it seems to me.
               | 
               | That assumes that you can represent all of the useful
               | parts of the decision about whether to fire or not to
               | fire in the equivalent of one floating point operation,
               | which seems to be an optimistic assumption. It also
               | assumes there's no useful information encoded into e.g.
               | phase of firing.
        
               | jart wrote:
               | Imagine that there's a little computer inside each neuron
               | that decides when it needs to do work. Those computers
               | are an implementation detail of the flops being provided
               | by neurons, and would not increase the overall flop
               | count, since that'd be counting them twice. For example,
               | how would you measure the speed of a game boy emulator?
               | Would you take into consideration all the instructions
               | the emulator itself needs to run in order to simulate the
               | game boy instructions?
        
               | mlyle wrote:
               | Already considered in my comment.
               | 
               | > Imagine that there's a little computer inside each
               | neuron that decides when it needs to do work
               | 
               | Yah, there's -bajillions- of floating point operation
               | equivalents happening in a neuron deciding what to do.
               | They're probably not all functional.
               | 
               | BUT, that's why I said the "useful parts" of the
               | decision:
               | 
               | It may take more than the equivalent of one floating
               | point operation to decide whether to fire. For instance,
               | if you are weighting multiple inputs to the neuron
               | differently to decide whether to fire now, that would
               | require multiple multiplications of those inputs. If you
               | consider whether you have fired recently, that's more
               | work too.
               | 
               | Neurons do all of these things, and more, and these
               | things are known to be functional-- not mere
               | implementation details. A computer cannot make an
               | equivalent choice in one floating point operation.
               | 
               | Of course, this doesn't mean that the brain is _optimal_
               | -- perhaps you can do far less work. But if we're going
               | to use it as a model to estimate scale, we have to
               | consider what actual equivalent work is.
        
               | jart wrote:
               | I see. Do you think this is what Kurzweil was accounting
               | for when he multiplied by 1000 connections?
        
               | mlyle wrote:
               | Yes, but it probably doesn't tell the whole story.
               | 
               | There's basically a few axes you can view this on:
               | 
               | - Number of connections and complexity of connection
               | structure: how much information is encoded about how to
               | do the calculations.
               | 
               | - Mutability of those connections: these things are
               | growing and changing -while doing the math on whether to
               | fire-.
               | 
               | - How much calculation is really needed to do the
               | computation encoded in the connection structure.
               | 
               | Basically, brains are doing a whole lot of math and
               | working on a dense structure of information, but not very
               | precisely because they're made out of meat. There's
               | almost certainly different tradeoffs in how you'd build
               | the system based on the precision, speed, energy, and
               | storage that you have to work with.
        
               | queuebert wrote:
               | Synapses might be akin to transistor count, which is only
               | roughly correlated with FLOPs on modern architectures.
               | 
               | I've also heard in a recent talk that the optic nerve
               | carries about 20 Mbps of visual information. If we
               | imagine a saturated task such as the famous gorilla
               | walking through the people passing around a basketball,
               | then we can arrive at some limits on the conscious brain.
               | This does not count the autonomic, sympathetic, and
               | parasympathetic processes, of course, but those could in
               | theory be fairly low bandwidth.
               | 
               | There is also the matter of the "slow" computation in the
               | brain that happens through neurotransmitter release. It
               | is analog and complex, but with a slow clock speed.
               | 
               | My hunch is that the brain is fairly low FLOPs but highly
               | specialized, closer to an FPGA than a million GPUs
               | running an LLM.
        
               | ben_w wrote:
               | > They're not guesses. We know they use A100s and we know
               | how fast an A100 goes.
               | 
                | And we _don't_ know how many GPT-4 instances run on
               | single A100, or if it's the other way around and how many
               | A100s are needed to run a single GPT-4 instance. We also
               | don't know how many tokens/second any given instance
               | produces, so multiple users may be (my guess is they are)
               | queued on any given instance. We have a rough idea how
               | many machines they have, but not how intensively they're
               | being used.
               | 
               | > You can cut a brain open and see how many neurons it
               | has and how often they fire. Kurzweil's 10 petaflops for
               | the brain (100e9 neurons * 1000 connections * 200
               | calculations) is a bit high for me honestly. I don't
               | think connections count as flops. If a neuron only fires
               | 5-50 times a second then that'd put the human brain at .5
               | to 5 teraflops it seems to me.
               | 
               | You're double-counting. "If a neuron only fires 5-50
               | times a second" = maximum synapse firing rate * fraction
               | of cells active at any given moment, and the 200 is what
                | you get from assuming it _could_ go at 1000/second
                | (they
               | can) but only 20% are active at any given moment (a bit
               | on the high side, but not by much).
               | 
               | Total = neurons * synapses/neuron * maximum synapse
               | firing rate * fraction of cells active at any given
               | moment * operations per synapse firing
               | 
               | 1e11 * 1e3 * 1e3 Hz * 10% (of your brain in use at any
               | given moment, where the similarly phrased misconception
               | comes from) * 1 floating point operation = 1e16/second =
               | 10 PFLOP
               | 
               | It currently looks like we need more than 1 floating
               | point operation to simulate a synapse firing.
               | 
               | > The other estimates like 1e28 are measuring different
               | things.
               | 
               | Things which may turn out to be important for e.g.
               | Hebbian learning. We don't know what we don't know. Our
               | brains are much more sample-efficient than our ANNs.
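The competing back-of-envelope figures in this sub-thread reduce to a few multiplications. A sketch using the numbers quoted above; every input here is a contested assumption, not a measurement:

```python
# Back-of-envelope brain-compute estimates using the figures quoted
# in this thread. All inputs are rough assumptions.

NEURONS = 1e11              # ~100 billion neurons
SYNAPSES_PER_NEURON = 1e3   # ~1000 connections each
MAX_SYNAPSE_RATE_HZ = 1e3   # peak firing rate
FRACTION_ACTIVE = 0.1       # fraction active at any given moment
OPS_PER_EVENT = 1           # FLOPs per synapse event (the contested bit)

# Kurzweil-style: count every active synapse event as one FLOP.
kurzweil = (NEURONS * SYNAPSES_PER_NEURON * MAX_SYNAPSE_RATE_HZ
            * FRACTION_ACTIVE * OPS_PER_EVENT)

# "Spikes only": ignore connections, count 5-50 spikes/s per neuron.
spikes_low, spikes_high = NEURONS * 5, NEURONS * 50

print(f"Kurzweil-style: {kurzweil / 1e15:.0f} PFLOPS")
print(f"Spike-only: {spikes_low / 1e12:.1f} to {spikes_high / 1e12:.0f} TFLOPS")
```

The two camps in the thread differ only in which factors they keep: with the synapse terms included you land at ~10 PFLOPS, and with spikes alone at 0.5-5 TFLOPS.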
        
               | MAXPOOL wrote:
                | That's based on an old assumption of neuron
                | function.
                | 
                | Firstly, Kurzweil underestimates the number of
                | connections by an order of magnitude.
                | 
                | Secondly, dendritic computation changes things.
                | Individual dendrites, and the dendritic tree as a
                | whole, can do multiple independent computations:
                | logical operations, low-pass filtering, coincidence
                | detection, ... One neuronal activation is
                | potentially thousands of operations per neuron.
                | 
                | A single human neuron can be the equivalent of
                | thousands of ANN units.
        
               | mechagodzilla wrote:
               | They _might_ generate improvements, but I'm not sure why
               | people think those improvements would be unbounded. Think
               | of it like improvements to jet engines or internal
               | combustion engines - rapid improvements followed by
               | decades of very tiny improvements. We've gone from 32-bit
               | LLM weights down to 16, then 8, then 4 bit weights, and
               | then a lot of messy diminishing returns below that.
                | Moore's law is running on fumes for process
                | improvements, so
               | each new generation of chips that's twice as fast manages
               | to get there by nearly doubling the silicon area and
               | nearly doubling the power consumption. There's a lot of
               | active research into pruning models down now, but mostly
               | better models == bigger models, which is also hitting all
               | kinds of practical limits. Really good engineering might
               | get to the same endpoint a little faster than mediocre
               | engineering, but they'll both probably wind up at the
               | same point eventually. A super smart LLM isn't going to
               | make sub-atomic transistors, or sub-bit weights, or
               | eliminate power and cooling constraints, or eliminate any
               | of the dozen other things that eventually limit you.
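The diminishing returns of weight quantization show up directly in the arithmetic. A sketch for a hypothetical 70B-parameter model (the parameter count is an assumption chosen for illustration):

```python
# Weight-memory footprint of a hypothetical 70B-parameter model at
# the bit widths mentioned above. Each halving saves half of what
# remains, so the absolute savings shrink every step, while quality
# losses below ~4 bits grow.
PARAMS = 70e9  # hypothetical parameter count

for bits in (32, 16, 8, 4, 2):
    gib = PARAMS * bits / 8 / 2**30  # bits -> bytes -> GiB
    print(f"{bits:2d}-bit weights: {gib:6.1f} GiB")
```

Going from 32-bit to 16-bit frees ~130 GiB; going from 4-bit to 2-bit frees only ~16 GiB, at far greater cost in quality.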
        
               | CuriouslyC wrote:
               | Saying that AI hardware is near a dead end because
               | Moore's law is running out of steam is silly. Even GPUs
               | are very general purpose, we can make a lot of progress
               | in the hardware space via extreme specialization,
               | approximate computing and analog computing.
        
               | jart wrote:
               | Bro, Jensen Huang just unveiled a chip yesterday that
               | goes 20 petaflops. Intel's latest raptorlake cpu goes 800
               | gigaflops. Can you really explain 25000x progress by the
               | 2x larger die size? I'm sure reactionary America wanted
               | Moore's law to run out of steam but the Taiwanese
               | betrayal made up for all the lost Moore's law progress
               | and then some.
        
               | Nokinside wrote:
               | Pro tip: If you want to know who is the king of AI chips,
               | compare FLOPS (or TOPS) per chip area, not FLOPS/chip.
               | 
                | As long as the bottleneck is fab capacity in wafers
                | per hour, the number of operations per second per
                | chip area determines who can produce the most
                | compute at the best price. It's a good measure even
                | across different technology nodes and superchips.
               | 
               | Nvidia is leader for a reason.
               | 
               | If manufacturing capacity increases to match the demand
               | in the future, FLOPS or TOPS per Watt may become
               | relevant, but now it's fab capacity.
        
               | hock_ads_ad_hoc wrote:
               | Taiwanese betrayal? I'm not sure I understand the
               | reference.
        
               | jart wrote:
               | There's no reference. It's just a bad joke. What they did
               | was actually very good.
        
               | UniverseHacker wrote:
               | LLMs are so much more than you are assuming... text,
               | images, code are merely abstractions to represent
               | reality. Accurate prediction requires no less than
               | usefully generalizable models and deep understanding of
               | the actual processes in the world that produced those
               | representations.
               | 
                | From firsthand experience, I know they can provide
                | creative new solutions to totally novel problems...
                | instead of assuming what they should be able to do,
                | I experimented to see what they can actually do.
               | 
               | Focusing on the simple mechanics of training and
               | prediction is to miss the forest for the trees. It's as
               | absurd as saying how can living things have any
               | intelligence? They're just bags of chemicals oxidizing
               | carbon. True but irrelevant- it misses the deeper fact
               | that solving almost any problem deeply requires
               | understanding and modeling all of the connected problems,
               | and so on, until you've pretty much encompassed
               | everything.
               | 
               | Ultimately it doesn't even matter what problem you're
               | training for- all predictive systems will converge on
               | general intelligence as you keep improving predictive
               | accuracy.
        
             | WillAdams wrote:
              | Yes, if one can consistently get an LLM to suggest
              | improvements to itself, one eventually gets a series
              | of software improvements culminating in the best
              | possible performance on currently available hardware.
             | 
             | Until we get to a point where an AI has the wherewithal to
             | create a fab to make its own chips and then do assembly w/o
              | human intervention (something along the lines of Steve Jobs'
             | vision of a computer factory where sand goes in at one end
             | and finished product rolls out the other) it doesn't seem
             | likely to amount to much.
        
             | jerf wrote:
             | LLM != AI.
             | 
             | An LLM is not going to suggest a reasonable improvement to
             | itself, except by sheerest luck.
             | 
             | But then next generation, where the LLM is just the
             | language comprehension and generation model that feeds into
             | something else yet to be invented, I have no guarantees
             | about whether that will be able to improve itself. Depends
             | on what it is.
        
             | microtherion wrote:
             | That may happen more easily than you're suggesting. LLMs
             | are masters at generating plausible sounding ideas with no
             | regard to their factual underpinnings. So some of those
             | computational bong hits might come up with dozens of
             | plausible looking suggestions (maybe featuring made up
             | literature references as well).
             | 
             | It would be left to human researchers to investigate them
             | and find out if any work. If they succeed, the LLM will get
             | all the credit for the idea, if they fail, it's them who
             | will have wasted their time.
        
           | angiosperm wrote:
           | It has, anyway, already had a profound effect on the IT job
           | market.
        
           | jart wrote:
           | > Within thirty years, we will have the technological means
           | to create superhuman intelligence.
           | 
           | Blackwell.
           | 
           | > o Develop human/computer symbiosis in art: Combine the
           | graphic generation capability of modern machines and the
           | esthetic sensibility of humans. Of course, there has been an
           | enormous amount of research in designing computer aids for
           | artists, as labor saving tools. I'm suggesting that we
           | explicitly aim for a greater merging of competence, that we
           | explicitly recognize the cooperative approach that is
           | possible. Karl Sims [22] has done wonderful work in this
           | direction.
           | 
           | Stable Diffusion.
           | 
           | > o Develop interfaces that allow computer and network access
           | without requiring the human to be tied to one spot, sitting
           | in front of a computer. (This is an aspect of IA that fits so
           | well with known economic advantages that lots of effort is
           | already being spent on it.)
           | 
           | iPhone and Android.
           | 
           | > o Develop more symmetrical decision support systems. A
           | popular research/product area in recent years has been
           | decision support systems. This is a form of IA, but may be
           | too focussed on systems that are oracular. As much as the
           | program giving the user information, there must be the idea
           | of the user giving the program guidance.
           | 
           | Cicero.
           | 
           | > Another symptom of progress toward the Singularity: ideas
           | themselves should spread ever faster, and even the most
           | radical will quickly become commonplace.
           | 
           | Trump.
           | 
           | > o Use local area nets to make human teams that really work
           | (ie, are more effective than their component members). This
           | is generally the area of "groupware", already a very popular
           | commercial pursuit. The change in viewpoint here would be to
           | regard the group activity as a combination organism. In one
           | sense, this suggestion might be regarded as the goal of
           | inventing a "Rules of Order" for such combination operations.
           | For instance, group focus might be more easily maintained
           | than in classical meetings. Expertise of individual human
           | members could be isolated from ego issues such that the
           | contribution of different members is focussed on the team
           | project. And of course shared data bases could be used much
           | more conveniently than in conventional committee operations.
           | (Note that this suggestion is aimed at team operations rather
           | than political meetings. In a political setting, the
           | automation described above would simply enforce the power of
           | the persons making the rules!)
           | 
           | Ingress.
           | 
           | > o Exploit the worldwide Internet as a combination
           | human/machine tool. Of all the items on the list, progress in
           | this is proceeding the fastest and may run us into the
           | Singularity before anything else. The power and influence of
           | even the present-day Internet is vastly underestimated. For
           | instance, I think our contemporary computer systems would
           | break under the weight of their own complexity if it weren't
           | for the edge that the USENET "group mind" gives the system
           | administration and support people!) The very anarchy of the
           | worldwide net development is evidence of its potential. As
           | connectivity and bandwidth and archive size and computer
           | speed all increase, we are seeing something like Lynn
           | Margulis' [14] vision of the biosphere as data processor
           | recapitulated, but at a million times greater speed and with
           | millions of humanly intelligent agents (ourselves).
           | 
           | Twitter.
           | 
           | > o Limb prosthetics is a topic of direct commercial
           | applicability. Nerve to silicon transducers can be made [13].
           | This is an exciting, near-term step toward direct
            | communication.
           | 
           | Atom Limbs.
           | 
           | > o Similar direct links into brains may be feasible, if the
           | bit rate is low: given human learning flexibility, the actual
           | brain neuron targets might not have to be precisely selected.
           | Even 100 bits per second would be of great use to stroke
           | victims who would otherwise be confined to menu-driven
           | interfaces.
           | 
           | Neuralink.
           | 
           | ---
           | 
           | https://justine.lol/dox/singularity.txt
        
             | dingnuts wrote:
             | >> > Within thirty years, we will have the technological
             | means to create superhuman intelligence.
             | 
             | > Blackwell.
             | 
             | I'm fucking sorry but there is no LLM or "AI" platform that
             | is even real intelligence, today, easily demonstrated by
             | the fact that an LLM cannot be used to create a better LLM.
             | Go on, ask ChatGPT to output a novel model that performs
             | better than any other model. Oh, it doesn't work? That's
             | because IT'S NOT INTELLIGENT. And it's DEFINITELY not
             | "superhuman intelligence." Not even close.
             | 
             | Sometimes accurately regurgitating facts is NOT
             | intelligence. God it's so depressing to see commenters on
             | this hell-site listing current-day tech as ANYTHING
             | approaching AGI.
        
               | mlyle wrote:
               | You didn't read him correctly; he's not saying Blackwell
               | is AGI. I believe that he's saying that perhaps Blackwell
                | could be _computationally sufficient_ for AGI if "used
               | correctly."
               | 
               | I don't know where that "computationally sufficient" line
               | is. It'll always be fuzzy (because you could have a very
               | slow, but smart entity). And before we have a working
               | AGI, thinking about how much computation we need always
               | comes down to back of the envelope estimations with
               | radically different assumptions of how much computational
               | work brains do.
               | 
               | But I can't rule out the idea that current architectures
               | have enough processing to do it.
        
               | jart wrote:
               | I don't use the A word, because it's one of those words
               | that popular culture has poisoned with fear, anger, and
               | magical thinking. I can at least respect Kurzweil though
               | and he says the human brain has 10 petaflops. Blackwell
               | has 20 petaflops. That would seem to make it capable of
               | superhuman intelligence to me. Especially if we consider
               | that it can focus purely on thinking and doesn't have to
               | regulate a body. Imagine having your own video card that
               | does ChatGPT but 40x smarter.
        
           | CuriouslyC wrote:
           | What we're seeing right now with LLMs is like music in the
           | late 30s after the invention of the electric guitar. At that
            | point people still had no idea how to use it, so they
            | were treating it like an amplified acoustic guitar. It took
           | almost 40 years for people to come up with the idea of
           | harnessing feedback and distortion to use the guitar to
           | create otherworldly soundscapes, and another 30 beyond that
           | before people even approached the limit of guitar's range
           | with pedals and such.
           | 
           | LLMs are a game changer that are going to enable a new
           | programming paradigm as models get faster and better at
           | producing structured output. There are entire classes of app
            | that couldn't exist before because there was a non-
            | trivial "fuzzy" language problem in the loop. Furthermore I
           | don't think people have a conception of how good these models
           | are going to get within 5-10 years.
        
             | blauditore wrote:
             | > Furthermore I don't think people have a conception of how
             | good these models are going to get within 5-10 years.
             | 
             | Pretty sure it's quite the opposite of what you're
             | implying: People see those LLMs who closely resemble actual
             | intelligence on the surface, but have some shortcomings.
             | Now they extrapolate this and think it's just a small step
             | to perfection and/or AGI, which is completely wrong.
             | 
             | One problem is that converging to an ideal is obviously
             | non-linear, so getting the first 90% right is relatively
             | easy, and closer to 100% it gets exponentially harder.
             | Another problem is that LLMs are not really designed in a
             | way to contain actual intelligence in the way humans would
             | expect them to, so any apparent reasoning is very
             | superficial as it's just language-based and statistical.
             | 
              | In a similar spirit, science fiction stories set in the
              | near future often tend to have spectacular technology, like
             | flying personal cars, in-eye displays, beam travel, or mind
             | reading devices. In the 1960s it was predicted for the 80s,
             | in the 80s it was predicted for the 2000s etc.
        
               | PaulHoule wrote:
               | This book
               | 
               | https://www.amazon.com/Friends-High-Places-W-
               | Livingston/dp/0...
               | 
               | tells (among other things) a harrowing tale of a common
               | mistake in technology development that blindsides people
               | every time: the project that reaches an asymptote instead
               | of completion that can get you to keep spending resources
               | and spending resources because you think you have only 5%
               | to go except the approach you've chosen means you'll
               | never get the last 4%. It's a seductive situation that
               | tends to turn the team away from Cassandras who have a
               | clear view.
               | 
               | Happens a lot in machine learning projects where you
               | don't have the right features. (Right now I am chewing on
               | the problem of "what kind of shoes is the person in this
               | picture wearing?" and how many image classification
               | models would not at all get that they are supposed to
               | look at a small part of the image and how easy it would
               | be to conclude that "this person is on a basketball court
               | so they are wearing sneakers" or "this is a dude so they
               | aren't wearing heels" or "this lady has a fancy updo and
               | fancy makeup so she must be wearing fancy shoes". Trouble
               | is all those biases make the model perform better up to a
               | point but to get past that point you really need to
               | segment out the person's feet.)
        
               | CuriouslyC wrote:
               | You are looking at things like the failure of full self
               | driving due to massive long tail complexity, and
               | extrapolating that to LLMs. The difference is that full
               | self driving isn't viable unless it's near perfect,
               | whereas LLMs and text to image models are very useful
               | even when imperfect. In any field there is a sigmoidal
               | progress curve where things seem to move slowly at first
               | when getting set up, accelerate quickly once a framework
               | is in place, then start to run out of low hanging fruit
               | and have to start working hard for incremental progress,
               | until the field is basically mined out. Given the rate
               | that we're seeing new stuff come out related to LLMs and
               | image/video models, I think it's safe to say we're still
               | in the low hanging fruit stage. We might not achieve
               | better than human performance or AGI across a variety of
               | fields right away, but we'll build a lot of very powerful
               | tools that will accelerate our technological progress in
               | the near term, and those goals are closer than many would
               | like to admit.
        
           | throw1234651234 wrote:
           | Singularity doesn't necessarily rely on LLMs by any means.
           | It's just that communication is improving and the number of
           | people doing research is increasing. Weak AI is icing on top,
           | let alone LLMs, which are being shoe-horned into everything
           | now. VV clearly adds these two other paths:
            | o Computer/human interfaces may become so intimate that users
            |   may reasonably be considered superhumanly intelligent.
            | o Biological science may find ways to improve upon the
            |   natural human intellect.
           | 
           | https://edoras.sdsu.edu/~vinge/misc/singularity.html
        
           | jimbokun wrote:
           | Still has 6 years to be proven correct.
        
           | beambot wrote:
           | Probably just a question of time constant / zoom on your time
           | axis. When zoomed in up close, an exponential looks a lot
            | like a bunch of piecewise linear components, where big
            | breakthroughs are just discontinuous changes in slope...
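            | 
            | A quick sketch of that claim (the unit window width and the
            | least-squares fit are arbitrary illustrative choices, not
            | anything from the comment above):

```python
import math

def best_line_slope(f, t0, t1, n=100):
    """Least-squares slope of f over [t0, t1], sampled at n points."""
    ts = [t0 + (t1 - t0) * i / (n - 1) for i in range(n)]
    ys = [f(t) for t in ts]
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys))
    den = sum((t - t_mean) ** 2 for t in ts)
    return num / den

# Each short window of e^t looks roughly linear on its own, but the
# fitted slope jumps by a factor of about e from one window to the
# next: the "discontinuous changes in slope" seen when zoomed in.
slopes = [best_line_slope(math.exp, k, k + 1) for k in range(4)]
print(slopes)
```

Zoom the window width down and the same picture repeats: locally linear, with the slope of each successive "linear" era larger than the last.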
        
         | gumby wrote:
         | Just to clarify, the "singularity" conjectures a slightly
         | different and more interesting phenomenon, one _driven_ by
          | technological advances, true, but it is not defined by those
          | advances.
         | 
         | It was more the second derivative of future shock: technologies
         | and culture that enabled and encouraged faster and faster
          | change until the curve bent essentially vertical...asymptoting
         | to a mathematical singularity.
         | 
          | An example he spoke of was that, close to the singularity,
         | someone might found a corporation, develop a technology, make a
         | profit from it, and then have it be obsolete by noon.
         | 
         | And because you can't see the shape of the curve on the other
         | side of such a singularity, people living on the other side of
         | it would be incomprehensible to people on this side.
         | 
         | Ray Lafferty's 1965 story "Slow Tuesday Night" explored this
          | phenomenon years before Toffler wrote "Future Shock".
        
           | ethagnawl wrote:
           | Here's a link to the full text of _Slow Tuesday Night_: https
           | ://web.archive.org/web/20060719184509/www.scifi.com/sci...
        
           | PaulHoule wrote:
           | Note that the "Singularity" turns up in the novel
           | 
           | https://en.wikipedia.org/wiki/Marooned_in_Realtime
           | 
           | where people can use a "Bobble" to freeze themselves in a
           | stasis field and travel in time... forward. The singularity
           | is some mysterious event that causes all of unbobbled
           | humanity to disappear leaving the survivors wondering, even
           | 10s of millions of years later, what happened. As such it is
           | one of the best pretenses ever in sci-fi. (I am left
           | wondering though if the best cultural comparison is "The
           | Rapture" some Christians believe in making this more of a
           | religiously motivated concept as opposed to sound futurism.)
           | 
           | I've long been fascinated by this differential equation
            |           dx/dt = x^2
            | 
            | which has solutions that look like
            | 
            |           x = 1/(t0 - t)
           | 
           | which notably blows up at time t0. It's a model of an
           | "intelligence explosion" where improving technology speeds up
            | the rate of technological progress, but the very low growth
            | when t << t0 could also be a model for why it is hard to
           | bootstrap a two-sided market, why some settlements fail, etc.
           | About 20 years ago I was very interested in ecological
           | accounting and wondering if we could outrace resource
           | depletion and related problems and did a literature search
            | for people developing models like this further and was
            | pretty disappointed not to find much, though it did appear
            | as a footnote in the ecology literature here and there. Even
           | papers like
           | 
           | https://agi-conf.org/2010/wp-
           | content/uploads/2009/06/agi10si...
           | 
           | seem to miss it. (Surprised the lesswrong folks haven't
           | picked it up but they don't seem too mathematically inclined)
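            | 
            | A minimal numerical sketch of that blowup (forward Euler on
            | dx/dt = x^2 with x(0) = 1, so the analytic solution
            | x(t) = 1/(1 - t) diverges at t0 = 1; the step size and the
            | 1e12 cutoff standing in for "infinity" are arbitrary
            | choices):

```python
def euler_x_squared(x0, dt, t_end):
    """Integrate dx/dt = x^2 from x(0) = x0 by forward Euler,
    stopping at t_end or when x becomes numerically 'infinite'."""
    x, t = x0, 0.0
    trajectory = [(t, x)]
    while t < t_end:
        x += dt * x * x  # Euler step for dx/dt = x^2
        t += dt
        trajectory.append((t, x))
        if x > 1e12:  # treat this as the blowup
            break
    return trajectory

traj = euler_x_squared(x0=1.0, dt=1e-4, t_end=2.0)
t_blowup, x_final = traj[-1]
print(f"numerical blowup near t = {t_blowup:.3f} (analytic t0 = 1.0)")
```

Far from t0 the trajectory is nearly flat, which is the same shape the comment points to for bootstrapping two-sided markets and failed settlements: almost nothing happens for a long time, then everything at once.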
           | 
           | ---
           | 
           | Note I don't believe in the intelligence explosion because
           | what we've seen in "Moore's law" recently is that each
           | generation of chips is getting much more difficult and
           | expensive to develop whereas the benefits of shrinks are
           | shrinking and in fact we might be rudely surprised that the
            | state-of-the-art chips of the near future (and possibly 2024)
           | burn up pretty quickly. It's not so clear that chipmakers
           | would have continued to invest in a new generation if
           | governments weren't piling huge money into a "great powers"
           | competition... That is, already we might be past the point of
           | economic returns.
        
             | tim333 wrote:
             | I'm also a bit sceptical of an intelligence explosion but
             | compute per dollar has increased in a steady exponential
             | way long before Moore's law and will probably continue
             | after it. There are ways to progress other than shrinking
             | transistors.
        
               | PaulHoule wrote:
               | Even though we understand a lot more about how LLMs work
                | and have cut resource consumption dramatically in the
                | last year, we still know hardly anything, so it seems
                | quite likely there is a better way to do it.
               | 
                | For one thing, dense vectors for language seem kinda
                | insane to me. Change one pixel in a picture and it makes
                | no difference to the meaning. Change one letter in a
                | sentence and you can change the meaning completely, so a
                | continuous representation seems fundamentally wrong.
        
             | dekhn wrote:
             | IMHO Marooned in Realtime is the best Vinge book. Besides
             | being a dual mystery novel, it really explores the
             | implications of bobble technology and how just a few hours
             | of technology development near the singularity can be
             | extreme.
        
               | PaulHoule wrote:
               | Yep. I like it better than _Fire Upon the Deep_ but I do
               | like both of them. I didn't like _A Deepness in the Sky_
               | as it was feeling kinda grindy like _Dune_. (I wish we
               | could just erase _Dune_ so people could enjoy all of
               | Frank Herbert's other novels of which I love even the bad
               | ones)
        
               | jerf wrote:
                | The first time I read _A Deepness In The Sky_, I was a
               | bit annoyed, because I was excited for the A plot to
               | progress, and it felt like we were spending an awful lot
               | of time on B & C.
               | 
               | On a second read, when I knew where the story was going
               | and didn't need the frisson of resolution, I enjoyed it
                | much more. It's a good B & C plot, and it all does tie in.
               | But arguably the pacing is off.
        
               | dekhn wrote:
               | Can you recommend a non-Dune Herbert book? I recall
               | seeing Dosadi when I was a kid in the sci fi section of
               | the library and just never picked it up. I generally like
               | hard sci-fi and my main issue with Dune was that it went
               | off into the weeds too many times.
        
               | PaulHoule wrote:
                | I like the Dosadi books, _Whipping Star_, the short
                | stories in _Eye_, _Eyes of Heisenberg_, _Destination:
                | Void_, _The Santaroga Barrier_ (which my wife hates),
                | _Under Pressure_ and _Hellstrom's Hive_. If I had to
                | pick just one it might be _Whipping Star_, but maybe
                | _Under Pressure_ is the hardest sci-fi.
        
             | kanzure wrote:
             | from http://extropians.weidai.com/extropians.3Q97/4356.html
             | 
             | The bobble is a speculative technology that originated in
             | Vernor Vinge's science fiction. It allows spherical volumes
             | to be enclosed in complete stasis for controllable periods
             | of time. It was used in _The Peace War_ as a weapon, and in
             | _Marooned in Realtime_ as a way for humans to tunnel
             | through the Singularity unchanged.
             | 
             | As far as I know, the bobble is physically impossible.
             | However it may be possible to simulate its effects with
             | other technologies. Here I am especially interested in the
             | possibility of tunneling through the Singularity.
             | 
             | Why would anyone want to do that, you ask? Some people may
             | have long term goals that might be disrupted by the
             | Singularity, for example maintaining Danny Hillis's clock
             | or keeping a record of humanity. Others may want to do it
             | if the Singularity is approaching in an unacceptable manner
             | and they are powerless to stop or alter it. For example an
             | anarchist may want to escape a Singularity that is
             | dominated by a single consciousness. A pacifist may want to
             | escape a Singularity that is highly adversarial. Perhaps
             | just the possibility of tunneling through the Singularity
             | can ease people's fears about advanced technology in
             | general.
             | 
             | Singularity tunneling seems to require a technology that
             | can defend its comparatively powerless users against
             | extremely, perhaps even unimaginably, powerful adversaries.
             | The bobble of course is one such technology, but it is not
             | practical. The only realistic technology that I am aware of
             | that is even close to meeting this requirement is
             | cryptography. In particular, given some complexity
             | theoretic assumptions it is possible to achieve exponential
             | security in certain restricted security models.
             | Unfortunately these security models are not suitable for my
             | purpose. While adversaries are allowed to have
             | computational power that is exponential in the amount of
             | computational power of the users, they can only interact
             | with the users in very restricted ways, such as reading or
             | modifying the messages they send to each other. It is
             | unclear how to use cryptography to protect the users
             | themselves instead of just their messages. Perhaps some
             | sort of encrypted computation can hide their thought
             | processes and internal states from passive monitors. But
             | how does one protect against active physical attacks?
             | 
             | The reason I bring up cryptography, however, is to show
             | that it IS possible to defend against adversaries with
             | enormous resources at comparatively little cost, at least
             | in certain situations. The Singularity tunneling problem
             | should not be dismissed out of hand as being unsolvable,
             | but rather deserves to be studied seriously. There is a
             | very realistic chance that the Singularity may turn out to
             | be undesirable to many of us. Perhaps it will be unstable
             | and destroy all closely-coupled intelligence. Or maybe the
             | only entity that emerges from it will have the
             | "personality" of the Blight. It is important to be able to
             | try again if the first Singularity turns out badly.
             | 
             | and: http://lesswrong.com/lw/jgz/aalwa_ask_any_lesswronger_
             | anythi...
             | 
             | "I do have some early role models. I recall wanting to be a
             | real-life version of the fictional "Sandor Arbitration
             | Intelligence at the Zoo" (from Vernor Vinge's novel A Fire
             | Upon the Deep) who in the story is known for consistently
             | writing the clearest and most insightful posts on the Net.
             | And then there was Hal Finney who probably came closest to
             | an actual real-life version of Sandor at the Zoo, and Tim
             | May who besides inspiring me with his vision of
             | cryptoanarchy was also a role model for doing early
             | retirement from the tech industry and working on his own
             | interests/causes."
        
         | 0xdeadbeefbabe wrote:
         | > He wrote that he would be surprised if it occurred before
         | 2005 or after 2030.
         | 
         | Being surprised is also an exciting outcome. Was he thinking
         | about that too?
        
       | pontifier wrote:
       | Oh man, I sincerely hope he was signed up for cryonics. If there
       | was someone who deserved to see what the future holds, it was
       | him.
        
         | unnamed76ri wrote:
         | From what I've read, cryonics seems like a massive scam pulled
         | on rich people. The tissue damage in these frozen corpses is
         | extensive and irreparable.
        
           | gamblerrr wrote:
           | > irreparable
           | 
           | That's the gamble. I think you're right though, it's far
           | lower odds than the snake oil salesmen present.
        
             | adastra22 wrote:
             | The alternative of cremation is still lower odds.
        
           | niplav wrote:
           | What evidence do you base those beliefs on?
        
           | adastra22 wrote:
           | Then you obviously haven't read much about cryonics, which
           | involves vitrification rather than freezing to avoid such
           | tissue damage.
        
             | davidgerard wrote:
             | In real medical cryogenics, e.g., embryo preservation,
             | vitrification is spoken of as a kind of freezing, which, of
             | course, it is. Only cryonics advocates claim that
             | vitrification isn't a kind of freezing.
        
               | topynate wrote:
               | If the topic is tissue damage from sharp ice crystals,
               | it's pretty handy to draw the distinction between cooling
               | methods that cause that and ones that don't.
        
               | samatman wrote:
               | Yes, that's the relevant distinction in fact. Cryonics
               | are the former, not the latter. Multicellular cryonic
               | suspension is an unsolved problem after roughly the
               | blastocyst stage.
        
               | topynate wrote:
               | As of last year we're up to doing rat kidneys. They're
               | "heavily" damaged but they recover within a few weeks. To
               | be sure, there's a long way from that to near-perfectly
               | preserving a human brain, let alone a whole body.
               | 
               | https://www.statnews.com/2023/06/21/cryogenic-organ-
               | preserva...
        
         | niplav wrote:
          | Checking Alcor [1] and the Cryonics Institute [2] suggests no :-/
         | 
          | [1]: https://www.alcor.org/news/
          | [2]: https://cryonics.org/case-reports/
        
       | Patrick_Devine wrote:
       | I just finished reading Children of the Sky and re-reading A
        | Deepness in the Sky. I've been finding that with Vinge's work,
        | along with Iain Banks's, a lot of it is better the second time
        | around. There's just so much to take in.
        
       | boffinAudio wrote:
       | I once worked with a guy who was a close personal friend of
       | Vernor, and I remember with much joy the enormous collection of
       | science fiction he (the friend) had at his place .. literally
       | every wall was covered in paperback shelves, and to my eyes it
       | was a wonderland.
       | 
       | I casually browsed every shelf, enamoured with the collection of
       | scifi .. until I got to what I can only describe as a Golden Book
       | Shrine Ensconced in Halo of Respect - a carefully maintained,
       | diligently laid out bookshelf containing every single thing
       | Vernor Vinge had written. Everything, the friend said, including
       | stuff that Vernor had shared with him that would never see the
       | light of day until after he passed away. I wonder about that guy
       | now.
       | 
       | It wasn't my first intro to Mr. Vinge, but it was my first intro
       | to the fanaticism and devotion of his fan base - that in itself,
       | was a unique phenomenon to observe. Almost religious.
       | 
       | Which, given Mr. Vinge's works, is awe-inspiring, ironic and
       | tragic at the same time.
       | 
       | For me, it was a singular experience, realizing that science
       | fiction literature as a genre was far more vital and important to
       | our culture than it was granted in the mainstream. (This was the
       | mid-90's)
       | 
       | Science Fiction authors are capable of inculcating much
        | inspiration and wonder in their fans, yet "scifi" is often used
        | in a derogatory way among the literary cognoscenti. Alas, this
       | myopia occludes a great value to society, and I thank Mr. Vinge -
       | and his fanboix - for bringing me to a place where I understood
       | it was okay to value science fiction as a motivational form. That
       | Golden Book Shrine Ensconced in Halo was itself a gateway to much
       | wonder and awe.
        
         | bsenftner wrote:
          | Science Fiction - the literature - is so different from all
          | other media forms of SciFi that there needs to be a formal
          | separation of Science Fiction Literature from SciFi films,
          | live action and animated series, games, and comic books.
          | These other forms, SciFi, are cartoon abbreviations: fun and
          | adventurous, but not Science Fiction (Literature) and its
          | existential examination of how Science Changes Reality.
        
           | boffinAudio wrote:
            | Absolutely, in the same way that there are tabloid, citizen,
            | and authoritative forms of journalism.
           | 
           | For me the distinction is in the nature of speculation. If
           | you speculate about some facet, and it seems feasible but
           | fantastic, this is the event horizon at which the subject
           | becomes useful as well as entertaining. It was no doubt of
            | great utility to the original developers of satellites to
            | have had Arthur C. Clarke's models in their minds.
           | 
            | However, it's hardly viable to speculate about regular use
            | of teleportation or faster-than-light travel .. unless, of
            | course, we end up getting these things because some kid read
            | a story and decided it could be done, in spite of the rest
            | of the world's feelings about it ..
        
       | gitfan86 wrote:
       | Having read rainbows end just a few years before COVID was
       | interesting.
        
       | turing_complete wrote:
       | Just a couple years before the singularity. Sad.
        
       | turing_complete wrote:
       | If you never read Vernor Vinge (except for his essay on the
       | Technological Singularity), what would be the best book to start?
        
         | geden wrote:
          | A Fire Upon The Deep, shortly followed by A Deepness In The Sky
          | (which is even more page-turny but kinda requires AFUTD).
         | 
         | Across Real Time is also great and shorter.
        
           | db48x wrote:
           | I disagree with the notion that A Deepness in the Sky
           | requires having read A Fire Upon the Deep. In fact, I would
           | go so far as to say that each ends with an open question that
           | is answered by the other, so that no matter which one you
           | read first you will discover the answer in the second.
        
         | cwillu wrote:
         | A Fire Upon the Deep, without question.
        
         | angiosperm wrote:
         | _The Peace War_ is closer to home than _Fire_. Read both.
        
           | stormking wrote:
           | The Peace War is a mediocre adventure story followed by a
           | brilliant sequel, Marooned in Realtime.
        
         | loudmax wrote:
          | Agreed about _A Fire Upon The Deep_, optionally followed by _A
          | Deepness In The Sky_. Those are classically styled hard science
         | fiction novels with spaceships and aliens. But much more
         | thoughtful than you might expect from a typical spaceships and
         | aliens scifi novel.
         | 
         | If you're looking for something more germane to present
          | concerns, _Rainbows End_ is about a near future where people's
         | interaction with the world is mediated by augmented reality and
         | various forces are fighting over access to information.
         | 
         | And since I haven't seen it mentioned here yet, _Marooned In
         | Realtime_ is also really good.
         | 
         | But if you're looking for a single book, then you won't go
         | wrong with _A Fire Upon The Deep_.
        
         | rdl wrote:
         | True Names, particularly because it is short.
        
           | natechols wrote:
           | Seconded, and it touches on the key themes he developed
           | later. I love how a throwaway plot element became a central
           | part of an unrelated novel later, like he had more ideas than
           | he had time to fully explain.
        
       | adrianhon wrote:
       | A lot of love here for A Fire Upon the Deep (predicted fake news
       | via "the net of a thousand lies") and A Deepness in the Sky
       | (great depiction of cognitive enhancement, slower-than-light
       | interstellar trade), but less so for Rainbows End, which is
       | perhaps a less successful story but remains, after almost two
       | decades, the best description of what augmented reality games and
       | ARGs might do to the world.
        
         | liotier wrote:
         | > A Fire Upon the Deep (predicted fake news via "the net of a
         | thousand lies")
         | 
          | Predicted ? A Fire Upon the Deep was published in 1992, at which
         | date Usenet was already mature and suffering such patterns -
         | although not at FaceTwitTok scale.
         | 
         | But still, I love Vinge's take on information entropy across
         | time, space and social networks. A Deepness in the Sky features
         | the profession of programmer-archaeologist and I'm here for
         | that !
        
           | KineticLensman wrote:
            | > Predicted ? A Fire Upon the Deep was published in 1992, at
            | which date Usenet was already mature and suffering such
           | patterns
           | 
           | I still remember the moment when I realised that the galactic
           | network in 'Fire...' was in fact based on Usenet (which I
           | used heavily at the time), especially how it was low
           | bandwidth text (given the interstellar distances) and how it
           | had a fair number of nutters posting nonsense across the
           | galaxy ('the key insight is hexapodia'). Great author, who'll
           | be sadly missed.
        
             | db48x wrote:
             | Skrodes have six wheels, so...
        
         | floren wrote:
         | I recently re-read Rainbows End, and I think "do to the world"
         | is an appropriate phrasing. It's a strikingly unpleasant vision
          | of a world in which every space runs dozens of microtransaction
          | AR games 24/7... I found the part where Juan walks
         | through the "amusement park" particularly effective, where
         | little robots would prance around trying to entice him into
         | interacting with them (which would incur a fee).
        
         | natechols wrote:
         | I think it's also one of the best descriptions of living at the
         | onset of massive, disruptive technological changes, and how
         | disorienting (and occasionally terrifying) this would feel. The
         | fundamental problem with that book, for me, is that the main
         | protagonist is (deliberately) an utterly loathsome individual,
         | who somehow ends up as a good guy but doesn't seem to do very
         | much learning or self-reflection.
        
       | denton-scratch wrote:
       | I found a text/plain copy of A Fire Upon The Deep, and read it in
       | a single sitting. I later found a paperback edition in a second-
       | hand bookshop, and bought it. I've since re-read it at least
       | twice.
       | 
       | I'm sorry he's died.
        
       | lycopodiopsida wrote:
       | What a great mind - the concept of computer archeology from "A
       | Deepness in the Sky" is something I often think about looking at
       | legacy code and maybe something our children will think about
       | even more often.
        
       | ois_ultra wrote:
       | Considering his novel Rainbows End is about a very sick author in
       | his 70s getting brought back to the world by modern technological
       | breakthroughs in the mid 2020s, I feel like we've let him down in
       | some way. Maybe he knew he was already sick, even back then,
       | maybe not. Your meticulous and inspiring level of detail will be
       | missed.
        
       | ptero wrote:
        | His larger works are getting a lot of praise (justifiably - I
        | read "A Fire Upon the Deep" on a friend's recommendation, then
        | everything else Vinge wrote), but some of his short stories
        | strongly resonated with me, too.
       | 
        | "The Cookie Monster" is, IMO, a thought-provoking marvel.
        
         | dgacmu wrote:
         | Thank you for recommending this! I hadn't read it, and it was
         | delightful. It's online: https://www.ida.liu.se/~tompe44/lsff-
         | book/Vernor%20Vinge%20-...
        
       | ompogUe wrote:
       | Oh No! So Saddened! Found The Peace War at an airport book rack
       | in the late '80's. Then, found True Names at a library sale in
       | the early '90's and fell in love.
       | 
        | "Zones of Thought" was a really smart idea once you got it. I put
        | him as a 2nd-generation genius, like "Hendrix -> Van Halen":
        | "Asimov -> Vinge". Also (my only real nitpick ever), like Asimov,
        | he was a little weak in dialogue and the poetry of words
        | (compared to Tolkien and Heinlein), but they were literally both
        | STEM PhDs.
       | 
       | Rainbows End seemed to be the FAANG playbook for many years:
       | drones, AR (Google Glass and on), haptics, autonomous automobiles,
       | and on and on. Was thinking literally yesterday about how the
       | biotech and architecture/earthquake ideas hadn't made it to today
       | (which is around when the story takes place), but the latency
       | issues seem to have been licked well before now.
       | 
       | Requiescat in Pace
        
       | nl wrote:
       | One of the true greats.
       | 
       |  _True Names_ is a better cyberpunk story than anything Gibson or
       | Neal Stephenson wrote.
       | 
       | Everyone mentions _A Fire Upon the Deep_ and _A Deepness in the
       | Sky_ which are some of the best sci fi ever written, but I think
       | _The Peace War_ is way underrated too (although it was nominated
       | for a Hugo award which it lost to _Neuromancer_ ).
       | 
       | RIP
        
         | adamgordonbell wrote:
         | The Bobbler was a strange idea. It made for a fun concept. I
         | think there was more than one story in that world, if I'm
         | remembering correctly.
         | 
         | Rainbows End was very good!
        
           | phrotoma wrote:
           | The shift of the use of bobbles from Peace War to Marooned in
           | Realtime is _wild_. Fantastic stories, wildly creative,
           | delightfully different.
        
           | KineticLensman wrote:
           | > I think there was more than one story in that world
           | 
           | Two separate novels: 'The Peace War' and 'Marooned in
           | Realtime', sold collectively as 'Across Realtime'. Enjoyed
           | them both a lot, but for me 'Marooned...' packed more of an
           | emotional punch, especially as it becomes clearer what had
           | happened to the victim.
           | 
           | There is also a short story 'The Ungoverned' whose main
           | character is Wil W. Brierson, the protagonist in
           | 'Marooned...'.
           | 
           | Overviews without plot spoilers: 'The Peace War' describes a
           | near-future in which bobbles (apparently indestructible
           | stasis fields where time stands still) are used by 'hacker'
           | types to launch an insurrection against the state.
           | 'Marooned...' is set in the far future of the same world,
           | where bobbles are used to support one-way time travel further
           | into the future, where the few remaining humans try to
           | reconnect following the mysterious disappearance of 99.9% of
           | humanity. Both are high-concept SF, but 'Marooned...' also
           | has elements of police procedural where a low-tech detective
           | (Brierson) shanghaied into his future has to solve the slow
           | murder of a high-tech individual (someone from the far
           | future, relative to him).
        
         | tetris11 wrote:
          | _The Cookie Monster_ was one of the best short novellas I
          | ever read, and its influence can be seen everywhere from Greg
          | Egan's _Permutation City_ to episodes of _Black Mirror_.
         | 
         | Edit: I got it backwards, Egan's book came out first.
        
         | jordanpg wrote:
         | _A Fire Upon the Deep_ and _A Deepness in the Sky_ are the
         | books that opened my eyes to the utter incomprehensibility and
         | weirdness of what intelligent alien life would really be like
          | if it's out there.
         | 
         | I also credit the Transcend as being the first plausible,
         | secular explanation for "gods" that I ever came across back in
         | my militant atheist days.
         | 
         | These stories will be with me until I am gone, too. Thank you,
         | Vernor. RIP.
        
       | acdha wrote:
       | I'll echo everyone else saying you should read his books (and not
       | just Fire/Deepness) but wanted to note that while I never met him
       | in person, I know a few people who have, and literally every one
       | of them started by describing him as one of the nicest people
       | they'd ever met. That seems like an accomplishment of its own.
        
       | hyperific wrote:
       | My introduction to Vinge was Rainbows End. As a fellow San
       | Diegan raised in East County I found it hilarious that he
       | represented El Cajon as a wasteland.
        
       | salojoo wrote:
       | Vinge introduced me to space opera with the Zones of Thought
       | books. Such amazing books, which I've read multiple times.
        
       | masto wrote:
       | Vernor Vinge has an outstanding catalog of invention and
       | accomplishments. One theme that's threaded through many of his
       | stories and has become even more relevant lately is ubiquitous
       | computing and networking, and in particular augmented reality.
       | 
       | To truly understand the transformative potential of something
       | like a VR headset if technology allowed it to be unobtrusive and
       | omnipresent, one must read Vinge. The idea of consensual reality
       | as portrayed in, e.g., Fast Times at Fairmont High, is kind of
       | mind-blowing.
        
       | RecycledEle wrote:
       | Rest In Peace, old man.
       | 
       | I loved A Fire Upon the Deep. It gave me many hours of pleasure
       | and many things to think about.
        
       | nanolith wrote:
       | His Zones of Thought series, especially A Deepness in the Sky,
       | remains some of my favorite science fiction. This one hits hard.
        
       | griffey wrote:
       | I had the privilege to interview Vernor back in 2011, and
       | continued to have interactions with him on and off in the
       | intervening years. He was, as others have said, just immeasurably
       | kind and thoughtful. I'm sad that I'll not have the opportunity
       | to speak with him again.
        
         | ca98am79 wrote:
         | I emailed him out of the blue and asked him to write more
         | stories about Pham Nuwen. He replied and was really nice and we
         | corresponded over a couple of emails.
        
         | fl7305 wrote:
         | I had him as a CS teacher at SDSU for a class. I had no idea he
         | was a sci-fi author when I started the class. Bought his books
         | and was hooked.
         | 
         | He taught me how to implement OS thread context switching in
         | 68000 assembly language. We also had a lab where we had to come
         | up with a simple assembly function that executed slow or fast
         | depending on whether it used the cache efficiently or not.
         | 
         | Great teacher and author, and a very nice guy in general.
        
       | UniverseHacker wrote:
       | A true visionary, he will be missed. His fiction was
       | entertaining, but also filled with important lessons that will
       | help us prepare for the future in a better and more humane way.
       | Thank you Vernor Vinge.
        
       | vhodges wrote:
       | Ah, sad news indeed. I just finished re-reading The Peace War.
       | RIP
        
       | remram wrote:
       | @dang Can't this thread be titled "Vernor Vinge has died"? I feel
       | like this is the usual title for those. With this title it's not
       | obvious that the news is he died yesterday.
        
       | abraxas wrote:
       | This merits the black ribbon atop the HN banner, I think.
        
       | ViktorRay wrote:
       | After reading this and all the comments on this thread I think I
       | will pick up some of his books.
       | 
       | Too much science fiction nowadays is dystopian, cynical and
       | pessimistic. I don't have a problem with any individuals writing
       | stuff like that if they really want to. People should have the
       | freedom to write whatever they want. I just personally feel like
       | there is too much of the cynical pessimistic stuff being written
       | nowadays.
       | 
       | So seeing that Vernor Vinge wrote stories that portray science
       | and humanity in positive, hopeful, and optimistic ways makes me
       | very interested in reading his work.
        
         | djaychela wrote:
         | Yes, do! I've only read A Deepness in the Sky and A Fire Upon
         | The Deep but they were absolute joys, despite their
         | intimidating page count. Just mind-bendingly inventive and
         | continually interesting. I won't ruin anything for you, but as
         | a reader you make assumptions which then turn out not to be
         | true via progressive revelation as the books go on. Brilliant
         | stuff.
        
       | dmd wrote:
       | Please mirror, because more people should have a copy of this:
       | https://3e.org/vvannot
       | 
       | This is Vinge's _annotated_ copy of A Fire Upon the Deep. It has
       | all his comments and discussion with editors and early readers.
       | It provides an absolutely fascinating insight into his writing
       | process and shows the depth of effort he put into making sure
       | everything made sense.
        
         | NoMoreNicksLeft wrote:
         | The key insight always was hexapodia.
        
           | jerf wrote:
           | IIRC from the annotations (it's been a while), Vinge did not
           | _intend_ that Twirlip was right about everything; Twirlip was
           | merely meant to a representation of the weird things you used
           | to get on Usenet. But it worked out fairly well. (On the one
           | hand, this might technically be a spoiler, but on the other,
           | I think in practice even knowing this tidbit won 't actually
           | give anything away.)
           | 
           | (I'm glad someone linked to this. I actually bought the
           | annotated edition a while back and was reading it back in the
           | Palm Pilot era, I think, but I've lost it and never quite
           | finished it. So I'm happy to see it and have no qualms for
           | myself about grabbing it.)
        
         | thrtythreeforty wrote:
         | Thanks for mirroring this! This was only published on an old CD
         | for the '93 Hugo winners, and I had a devil of a time trying to
         | find a copy (inter-library-loan, etc) before realizing someone
         | had archived it on archive.org. It is indeed well worth the
         | time spent if you're a fan of _Fire_.
        
           | dmd wrote:
           | Is this annotated version on the archive.org CD? I couldn't
           | find it in https://archive.org/download/hugo_nebula_1993
        
             | EdwardCoffin wrote:
             | you have to open hugo.zip, or click on the _view contents_
             | link beside it
        
               | dmd wrote:
                | Yes, I did that. I see in there the Vinge novel but
                | NOT the version with annotations.
        
               | EdwardCoffin wrote:
               | The annotations are there in the RTF files, but there is
               | something quirky about the format of those RTF files -
               | perhaps they predate standardization or something. If you
               | open one of the RTFs in a straight text editor like emacs
               | or vi, you'll see them. There was a bit of discussion
               | around this here, a few years ago, when this version was
               | re-released [1]
               | 
               | [1] https://news.ycombinator.com/item?id=24876236
        
               | dmd wrote:
               | Oh wow thanks!
        
         | toomuchtodo wrote:
         | https://web.archive.org/web/*/https://3e.org/vvannot
        
         | rcarmo wrote:
          | I actually read A Fire Upon the Deep over Christmas, and then
         | went on with the rest. The entire trilogy is pretty amazing.
        
           | dekhn wrote:
           | I really wish he had wrapped everything up.
        
             | _emacsomancer_ wrote:
             | Yes, I've been hoping for this; one amongst many reasons to
             | be sad that Vinge didn't live longer.
        
         | joshstrange wrote:
          | That's interesting, but I found it incredibly difficult to
          | read/parse through. I've read A Fire Upon the Deep many times
          | (the whole trilogy) but the comment syntax is not easy for me
          | to follow at all. There are snippets that make a little
          | sense, but I don't think I could read this as-is.
        
         | e40 wrote:
         | I got this on CD-ROM back in the 90's. It was really fun
         | looking through stuff.
        
           | jhbadger wrote:
           | Yes, the Hugo-Nebula 1993 CD-ROM. That included some of the
           | earliest (some say _the_ earliest) examples of ebooks based
           | on current fiction (rather than on out-of-copyright classic
           | books). I have it myself still somewhere.
        
         | _emacsomancer_ wrote:
         | There's an interview with Vinge from 2009 [0] which contains a
         | screenshot [1] of him using Emacs with his home-brewed proto-
         | Org-mode annotation system (which appears in parent's link).
         | 
         | [0]:
         | https://web.archive.org/web/20170215121054/http://www.norwes...
         | 
         | [1]:
         | https://web.archive.org/web/20170104130412/http://www.norwes...
        
           | Casteil wrote:
           | Thank you - I love HN for things like this! A Fire Upon the
           | Deep is one of my favorite books/series. RIP
        
         | jrussino wrote:
         | Wow! This is the internet find of the week for me. How long
         | until this appears as its own post on the HN front page? Thanks
         | for mirroring.
        
         | dooglius wrote:
         | The read-first file says
         | 
         | > In this form, it is possible to read the story without being
         | bothered by the comments -- yet be able to see the comments on
         | demand. (Because of production deadlines I have not seen the
         | exact user interface for the Clarinet edition, and so some of
         | this discussion may be slightly inconsistent with details of
         | the final product.)
         | 
         | Did the final product not hold up, or is the page not
         | presenting it right?
        
       | mercutio2 wrote:
       | My favorite author of all time. 80 years is a good run, but I
       | wish he'd seen another 20.
       | 
       | I would've loved to read his reaction to the 2020s. Rainbows End
       | is by far the best prediction of what this decade has been like,
       | from 30 years ahead.
       | 
       | I wish we'd gotten to read a few more books from Vinge.
        
       | Eliezer wrote:
       | Ow.
        
       | sl-1 wrote:
       | RIP. His work is excellent and deep
        
       | toomuchtodo wrote:
       | https://en.wikipedia.org/wiki/Vernor_Vinge
        
       | r00fus wrote:
       | The first chapter of Fire Upon the Deep is one of my favorites of
       | all time. I really love some of the concepts introduced (i.e.,
       | that universal constants aren't universal) as a way to resolve
       | the Fermi paradox.
        
       | joshstrange wrote:
       | I've read a lot of SciFi but there was something special about
       | Vernor Vinge IMHO. Something about the way he wrote and what he
       | wrote about that "unlocked" various concepts for me. I'd have to
       | sit down and think about them to list it all out but I can trace
       | my interests in a number of concepts back to his books.
        
       | 725686 wrote:
       | Don't know who Vernor Vinge was, but his name rocks!
        
         | fl7305 wrote:
         | He got his last name from his Norwegian ancestors.
        
         | r2_pilot wrote:
         | Do yourself a favor and check out any of the books in this
         | thread, if that's your jam. I woke up to sad news, but if
         | someone can be introduced to his work by his passing, then it
         | wouldn't be all bad news.
        
       | schoen wrote:
       | On Vernor Vinge's connection to free software:
       | 
       | https://lwn.net/Articles/310463/
        
         | fl7305 wrote:
         | He was no slouch when it came to programming.
         | 
          | He taught classes that went through the actual 68000 assembly
          | needed to perform the context switch between threads in an
          | interrupt service routine (copy the saved registers of the
          | running thread from the stack to a separate area, and
          | overwrite them on the stack with the registers of the thread
          | you want to switch to).
        
       | epivosism wrote:
       | +1 on the recs for his main work. I also wanted to mention that I
       | loved his book Tatja Grimm's world, too. It's great, alternate
       | world fantasy, but with Vingean depth of thought about what it
       | all might mean... Looking it up now, I see this is a rework of
       | what must have been a very early novel for him, based on a work
       | that came out in 1969!
       | 
       | Thinking about this too, I'm sure he did a great job as a
       | professor, supporting his family and teaching. But in addition,
       | he had this greater creative gift to reach millions, too! I think
       | this pattern probably applies to a lot of us. Working and doing
       | useful things during the day out of necessity... and like him, I
       | hope everyone on HN puts in the effort and time to do something
       | creative, too, and finds their audience. It'd have been a shame
       | if the creative side of Vinge had never gotten out!
        
       | nodesocket wrote:
       | I went to San Diego State and majored in CompSci in 2006. Vernor
       | was a bit before my time but heard legendary stories. Rest in
       | peace.
        
         | fl7305 wrote:
         | I had a CS class with him in the 90s, the stories were true :)
        
           | nodesocket wrote:
           | Nice, let's go Aztecs in the tournament!
        
             | fl7305 wrote:
             | Saw Marshall Faulk attempt a critical two point conversion
             | at Jack Murphy stadium :)
        
       | TheMagicHorsey wrote:
       | This makes me so sad. He was my favorite sci-fi author. I was
       | looking forward to more books in the Fire Upon The Deep and
       | Deepness In The Sky universe.
        
       | supportengineer wrote:
       | RIP to author of one of my favorite books.
       | 
       | If Vernor Vinge doesn't deserve the black banner atop HN, then
       | nobody does.
        
       | TMWNN wrote:
       | No one else has mentioned what I think are his two greatest
       | insights besides the Singularity:
       | 
       | * _A Deepness in the Sky_ depicts a human interstellar
       | civilization thousands of years in the future, in which
       | superluminal travel is impossible (for the humans), so travelers
       | use hibernation to pass the decades while their ships travel
       | between systems. Merchants, including the ones the book portrays,
       | often revisit systems after a century or two, so they see great
       | changes on each visit.
       | 
       | The merchants repeatedly find that once smart dust (tiny swarms
       | of nanomachines) is developed, governments _inevitably_ use it
       | for ubiquitous surveillance, which inevitably causes societal
       | collapse.  <https://blog.regehr.org/archives/255>
       | 
       | * In said future human society pretty much all software has
       | already been written; it's just a matter of finding it. So
       | programmer-archaeologists search archives and run code on
       | emulators in emulators in emulators as far back as needed.
       | <https://garethrees.org/2013/06/12/archaeology/>
       | 
       | (Heck, recently I migrated a VM to its third hypervisor. It has
       | been a VM for 15 years, and began as a physical machine more than
       | two decades ago.)
        
       | underlipton wrote:
       | Aw, man. This is a bummer, considering how deep into "replicating
       | Rainbows End" we are (despite everyone and their mother's
       | insistence that we try for a "Ready Player One" future). I find
       | it funny that it seems to be one of his least-liked novels,
       | because the concepts and characters it plays with have always
       | been more approachable and relatable - and less terrifying - than
       | in much of his other work (insofar as I can tell, being wary of
       | reading them).
       | 
       | I still maintain that Miyazaki needs to adapt RE before he heads
       | out himself: https://imgur.com/a/8PeXHlb
        
       | fnordpiglet wrote:
       | I like to think history is like the greatest book ever written.
       | When we are born we get to open it somewhere in the middle, spend
       | a while figuring out what's going on, then have to close it and
       | never know the end. But Vinge clearly peeked ahead a few
       | chapters.
        
         | _0ffh wrote:
         | Amazing, I came up with the same thing. There's probably more
         | of us out there!
         | 
         | > then have to close it and never know the end
         | 
         | That bit has irked me to no end since I was young!
        
       | nxobject wrote:
       | I wonder whether, as a sci-fi writer from the golden age, it
       | would've been a blessing or a curse to have lived this far into
       | the 21st century.
        
       | pigeons wrote:
       | That's a black bar event.
        
       | hyperthesis wrote:
       | An exceptional author; _A Fire Upon the Deep_ was his very best
       | IMHO, and chapter 4 _its_ best:
       | https://www.baen.com/Chapters/-0812515285/0812515285___4.htm
       | 
       | A sensational adventure story, with the particular science-
       | fiction skill of communicating strangeness, effortlessly.
        
       | gildandstain wrote:
       | Such a loss! He fused good-hearted optimism and mindbending data-
       | architecture (and mind-architecture) into ripping plots.
       | 
       | And +1 for Marooned in Realtime. The awesome scale reminds me of
       | Greg Egan, but it highlights Vinge's particular genius for
       | imagining side-effects of the technological premise.
        
       ___________________________________________________________________
       (page generated 2024-03-21 23:00 UTC)