[HN Gopher] The muscular imagination of Iain M. Banks: a future ...
       ___________________________________________________________________
        
       The muscular imagination of Iain M. Banks: a future you might want
        
       Author : fanf2
       Score  : 169 points
       Date   : 2024-09-08 17:42 UTC (5 hours ago)
        
 (HTM) web link (www.robinsloan.com)
 (TXT) w3m dump (www.robinsloan.com)
        
       | ethbr1 wrote:
       | Curious question for HN re: Banks/culture -- how do Culture-esque
       | civilizations dominate technologically and economically over
       | civilizations with less Culture-like attributes?^
       | 
        | That aspect of Banks always felt a bit handwavy as to the
        | specifics. (I.e. good/freedom triumphs over evil/tyranny
        | because it's a superior philosophy.)
       | 
        | At galactic scale, across civilizational timespans, it's not
        | as apparent why that should hold true.
       | 
        | I'd have hoped that Banks, had he lived longer, would have
        | delved into this in detail.
       | 
       | Granted, Vinge takes a similar approach, constructing his big bad
       | from an obviously-not-equivalent antagonist, sidestepping the
       | direct comparison.
       | 
       | The closest I got from either of them was that they posited that
       | civilizations that tolerate and encourage diversity and
       | individual autonomy persist for longer, are thus older, and that
       | older counts for a lot at galactic scale.
       | 
       | ^ Note: I'm asking more about the Idiran Empire than an OCP.
        
         | bloopernova wrote:
         | I always assumed that Infinite Fun Space was used by the Minds
         | to pre-emptively model any potential conflict.
        
           | mattmanser wrote:
           | He's explicit about that in Excession, and other books.
        
             | Vecr wrote:
              | How does he deal with the trillions of people tortured
              | in Infinite Fun Space? Wars don't tend to be
              | suffering-free, especially the really nasty worst-case
              | scenarios you'd want to simulate. Did he reject the
              | substrate invariance argument? It sounds like it, but
              | if you want the Culture to be our future (as in the
              | future of actual humans), you can't do that,
              | because... it's not true.
        
           | ethbr1 wrote:
           | But the Minds are something of a turtles-all-the-way-down
           | solution.
           | 
           | If the Culture has Minds, why wouldn't other civilizations?
           | 
           | And why would Culture-esque Minds be superior to less-
           | Culture-y Minds?
        
             | AlotOfReading wrote:
              | Other civilizations do have minds, constructed
              | differently. The Gzilt in _Hydrogen Sonata_ have minds
              | constructed by humans after they've passed, with
              | personalities from those people.
        
               | Vecr wrote:
                | 1) Not humans; Banks just calls them that in the
                | text of the books. 2) Any mind derived from a
                | human-like civilization (even one only very
                | slightly human-like, and evolved) is at a massive
                | disadvantage against a highly optimized result of
                | recursive self-improvement.
        
               | AlotOfReading wrote:
               | Actual earth humans are in the books and noted to be of
               | the same general body plan. Close enough.
               | 
               | The Gzilt minds are specifically compared to culture
               | minds and deemed to be comparable in capabilities.
        
               | Vecr wrote:
               | > The Gzilt minds are specifically compared to culture
               | minds and deemed to be comparable in capabilities.
               | 
               | Well, that's not really realistic. Maybe they are only
               | pretending to be based on organics, but really aren't,
               | and just put up a facade.
        
             | ItCouldBeWorse wrote:
             | Because if other civilizations develop minds- the minds
             | take over- and derive from all worlds the one best outcome-
             | and then join the culture who is already riding the golden
             | path.
        
           | swayvil wrote:
           | I figured that it was simply a flavor of fun that only Minds
           | can appreciate (tho it could have practical uses too of
           | course).
        
         | sxp wrote:
         | Excession deals with this. It involves the Culture and "The
         | Affront" which is a spacefaring but savage civilization that
         | some people in the Culture dislike. Player of Games is a
         | similar story about a civilization that the Culture dislikes.
         | Those are my two favorite Culture books among my favorite books
         | in general.
         | 
          | The Wikipedia articles about the books go into
          | spoiler-heavy details about the Culture's interactions
          | with those civilizations.
        
         | AlotOfReading wrote:
          | That's a big part of the stories. Special Circumstances
          | nudges other civilizations towards the Culture's leanings
          | (see _Player of Games_, _Surface Detail_) as they're
          | climbing the technology ladder.
        
           | ItCouldBeWorse wrote:
            | Some day, the ambassador just travels upriver to the
            | badlands and starts a revolution, and wins. The lesson
            | to learn here: always offer them a chair, and kill them
            | if they react badly.
        
           | ethbr1 wrote:
           | See my comment below, re: Minds though.
           | 
            | "Anything unique to the Culture" as a solution raises
            | the question "Why is that unique to the Culture?"
           | 
           | It isn't clear why super-spy-diplomat-warriors would _only_
           | be produced by the Culture.
           | 
           | As soon as both sides have a thing, it ceases to be a
           | competitive advantage.
           | 
           | So SC as a solution implies that no other civilizations have
           | their version of SC. Why not?
        
             | idontwantthis wrote:
              | In the first book, it seemed like they confronted an
              | enemy that actually got pretty close to beating them.
              | Plus I really liked the epilogue, where it briefly
              | put the Culture in its proper scale and talked about
              | completely unrelated goings-on in the galaxy that
              | were so far away the Culture would never have
              | anything to do with them. It's conceivable there are
              | several same level civilizations in the galaxy that
              | would compete with the Culture if they ever met.
        
               | shawn_w wrote:
               | >It's conceivable there are several same level
               | civilizations in the galaxy that would compete with the
               | Culture if they ever met.
               | 
               | More than conceivable, we see them in some of the books.
        
               | hermitcrab wrote:
                | The galaxy is so old that different civilizations
                | could easily be millions or even billions of years
                | apart in development. And look what happens when
                | human civilizations only a few hundred years apart
                | meet. So it seems unlikely that two civilizations
                | would be closely matched.
        
               | theptip wrote:
               | Unless there is some sort of "maximum diameter" of each
               | civilization, past which it either splits, stagnates, or
               | implodes. In which case you can sidestep the assumption
               | that development monotonically increases and the first-
               | mover must win.
        
         | wcarss wrote:
         | I think answers in response to your post will differ depending
         | on whether or not you've read many of the Culture books -- to
         | me it kind of sounds like you have, but it also kind of sounds
         | like you haven't.
         | 
         | If you haven't, I would recommend Player of Games, which is one
         | of the few culture novels I have read, but which I think deals
         | with this topic directly as the main idea of the book.
         | 
         | If you have read it, it's possible your criticism is running
         | deeper and you feel the way it's handled in that book is
         | handwavey. I can't really address that criticism, it's
          | perfectly valid! I'm not sure if other books do any
          | better of a job, but it felt on par with Asimov writing
          | political/military intrigue in Foundation: entertaining
          | and a little cute, if somewhat shallow.
        
           | ethbr1 wrote:
            | It's been a minute since I read _Games_, but from
            | memory the target civilization there is of a scale
            | _far_ smaller than the Culture.
           | 
           | Which is to say, if they both mobilized for brute force total
           | war, the Culture could steamroll them.
           | 
            | Which makes it a "send the Terminator back in time to
            | kill Sarah Connor" solution -- strangle a potential
            | future competitor in the womb.
           | 
            | That makes for an interesting book on the ruthless
            | realpolitik behavior of otherwise morally and
            | ethically superior civilizations, and how they
            | actually maintain their superiority. (Probably Banks'
            | point.)
           | 
           | But less-so on the hot tech-vogue generalization of "Isn't
           | the Culture so dreamy? They've evolved beyond our ignorance."
        
             | n4r9 wrote:
             | I'd note a couple of points here:
             | 
              | - Yes, the Culture is way more technologically
              | advanced than the Azadians. But the point is that a
              | basic Culture human of standard intelligence (albeit
              | with extensive knowledge of strategy games) can,
              | after a relatively small amount of training, defeat
              | an Azadian who has devoted their entire life to Azad.
              | And the reason is that the Culture human's strategies
              | reflect the values and philosophy of the Culture. The
              | subtext is that the non-hierarchical nature of the
              | Culture leads to a more effective use of resources.
             | 
              | - The Culture-Idiran war is an example of them
              | confronting an enemy of comparable technological
              | development. Again, it's implied that the Culture
              | wins because their decentralised and non-hierarchical
              | existence lets them move rapidly throughout the
              | galaxy while switching quickly to a full war-footing.
             | 
              | - It sounds like you have the impression that the
              | Culture dominates _all other_ civilisations. This is
              | not true. For example, in Excession they discover
              | that they're being observed by a vastly superior
              | civilization that has some ability to hop between
              | dimensions in a way that they cannot. There are
              | civilisations like the Gzilt or Homomdans who are on
              | a level with the Culture, and ancient Elder species
              | like the Involucra who also give Culture ships a run
              | for their money in combat.
        
           | hermitcrab wrote:
            | The idea that the course of a civilization's future
            | history can be mathematically predicted with
            | precision, as it is in Foundation[1], seems a little
            | silly. But those books did predate chaos theory by
            | some margin. Also, I'm not sure Asimov actually
            | believed that 'psychohistory' would be possible.
           | 
           | [1] If I recall correctly. It is ~40 years since I read
           | Foundation.
        
         | YawningAngel wrote:
          | The fact that the Culture was not only willing to use
          | very powerful general AI but also to allow it to run the
          | entire civilisation, whereas the Idirans banned it, might
          | have been a factor. No matter how smart Idirans might be,
          | presumably Minds would have a significant edge.
        
         | nazgulnarsil wrote:
          | the mechanics of cooperation probably scale better than
          | those of defection, by their nature. Defectors need to
          | pay higher costs guarding themselves against other
          | defectors, and must constantly figure out which
          | defections to pick for themselves to win.
        
           | Vecr wrote:
           | That's not known for sure. The game theory simulations go one
           | way or the other depending on what assumptions you use. I'm
           | not sure you can say "probably" there.
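The assumption-dependence is easy to make concrete. A toy one-shot tournament (all payoffs and the guarding-cost parameter below are invented for the sketch): defectors exploit cooperators but pay a per-encounter "guarding" overhead against other defectors, and which strategy scales better flips with that single parameter.

```python
# Toy tournament with made-up payoffs: defectors exploit cooperators
# but pay a per-encounter "guarding" cost against other defectors.
# Which strategy wins flips with that one assumed parameter.

R, S, T, P = 3, 0, 5, 1  # classic prisoner's-dilemma payoffs

def payoff(me, other, guard_cost):
    """Single-round payoff for strategy `me` ('C' or 'D') vs `other`."""
    if me == 'C':
        return R if other == 'C' else S
    # Defector: exploit cooperators, pay to guard against fellow defectors.
    return T if other == 'C' else P - guard_cost

def mean_fitness(pop, guard_cost):
    """Average payoff of each strategy when everyone meets everyone."""
    return {s: sum(payoff(s, o, guard_cost) for o in pop) / len(pop)
            for s in ('C', 'D')}

pop = ['C'] * 50 + ['D'] * 50
print(mean_fitness(pop, guard_cost=0.5))  # defectors come out ahead
print(mean_fitness(pop, guard_cost=4.0))  # cooperators come out ahead
```

With cheap guarding, defection pays; once guarding costs dominate the defector-defector payoff, cooperation scales better -- exactly the kind of knob the simulations disagree about.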
        
         | lxe wrote:
         | I don't think Culture's philosophy has been the driving factor
         | behind its dominance and military advantage. I think the
         | anarchy utopia is the side effect of the Culture minds being at
         | the developmental peak over other civilizations.
        
           | Rzor wrote:
            | It's explicitly said in the novels (or by Banks, I
            | can't remember exactly) that while civilizations
            | sublime when they reach a certain point of development,
            | the Culture seems hellbent on staying put, venturing
            | around the universe instead.
        
         | elihu wrote:
         | I think the reason is that the Culture has been around long
         | enough to attain a level of technological development at which
         | most civilizations sublime and simply stop participating in
         | what we call material existence. The Culture could do so but
         | has chosen not to do so (at least not collectively; individual
         | people and minds sublime on a regular basis).
         | 
         | A lot of the civilizations that would be militarily or
         | economically more powerful than the culture aren't a problem
         | because they've already sublimed.
        
       | swayvil wrote:
        | Let's play Maximum Happy Imagination.
        | 
        | I'll start small.
        | 
        | Flying cars.
        
       | A_D_E_P_T wrote:
        | _You_ might want to live there, but I wouldn't. Virtually
        | all humans in the books -- and I'm aware that they're not
        | Earth humans but a wide variety of humanoid aliens -- are
        | kept as pets by the ships, for amusement, basically as
        | clowns. Everything
       | important about the flow of human life is decided by the mighty
       | ship minds; humans are left to nibble at the margins and dance to
        | the tune of their betters. There is a small subset of
        | elites, in organizations like Special Circumstances, that
        | are granted a modicum of independent agency, but even this
        | is rather difficult to justify under the circumstances.
       | 
       | Most of the drama in the books comes to pass when the ship-
       | dominated Culture interacts with a "backwards and benighted," but
       | still vital and expansionist, species.
       | 
        | It's just not a _human_ future. It's a contrived future
        | where humans are ruled by benign Gods. I suppose that for
        | some people this would be a kind of heaven. For others,
        | though...
       | 
       | In a way it's a sort of anti-Romanticism, I guess.
        
         | Mikhail_K wrote:
         | The author admits to not liking "Consider Phlebas," which is
         | the most original and captivating of the Culture series.
        
           | EndsOfnversion wrote:
           | Gotta read that one with a copy of The Wasteland, and From
           | Ritual To Romance handy.
           | 
            | The Command System train as a lance smashing into the
            | inverted-chalice (grail) dome of the station at the
            | end. Death by water. Running round in a ring. Tons of
            | other parallels if you dig/squint.
        
           | grogenaut wrote:
            | I remember "Consider Phlebas" as "not much happens,"
            | "giant train in a cave," "smart nuke." I think the
            | constant switching between unfamiliar viewpoints makes
            | "Consider" and "Weapons" pretty unfun (as does the
            | fact that everyone in "Weapons" sucks).
           | 
            | I definitely prefer "Player". But everyone gets to
            | enjoy what they enjoy. I'd love to have had more Banks
            | to love or hate as I chose :(
        
           | speed_spread wrote:
            | Consider Phlebas is interesting and funny, but it's
            | also a disjointed mess compared to later works. It
            | reads like an Indiana Jones movie: entertaining, but
            | it doesn't give that much to reflect upon once you've
            | finished it.
        
             | Mikhail_K wrote:
             | If it doesn't give that much to reflect upon, then you
             | didn't read it very carefully.
             | 
              | How about reflecting upon Horza's reasons for siding
              | with the Idirans? The later installments of the
              | "Culture" novels are, in comparison, just empty
              | triumphalism: "Rah rah rah, the good guys won and
              | lived happily ever after."
        
           | lxe wrote:
            | I loved Consider Phlebas and I find it to be a great
            | way to start the Culture series AND a great standalone
            | space opera. Not sure why it gets the hate it does. It
            | has everything any other Culture book has: imaginative
            | plot, characters, insane adventures, sans interactions
            | with Minds for the most part.
        
             | Mikhail_K wrote:
             | > sans interactions with Minds for the most part.
             | 
             | That's one of the reasons why this book is better than the
             | other "Culture" novels.
        
           | whimsicalism wrote:
           | best way to start an HN flame war
        
           | HelloMcFly wrote:
            | Fun adventure story, and a really good idea to view
            | the Culture from the eyes of an outsider, but in my
            | view Banks' skill at writing wasn't as well-developed
            | when he wrote CP. Too much "and then this and then
            | this and then this" compared to his other work.
            | Obviously YMMV.
            | 
            | I do think stating CP is the best of the series is
            | also quite definitively a contrarian take.
        
         | swayvil wrote:
         | The Minds use humans as tools for exploring the "psychic" part
         | of reality too (Surface Detail? I forget exactly).
         | 
         | There's that insinuation that humans are specialler than
         | godlike machines.
        
           | throwaway55340 wrote:
           | There always was an undertone of "aww dogs, how could we live
           | without them"
        
           | Vecr wrote:
           | Yes, well, even when taking that kind of weird stuff
           | seriously we're not all that far from certainty that it won't
           | work out like that in real life.
           | 
           | For example, why would you want to keep around a creature
           | that can Godel attack you, even if you're an ASI? Humans not
           | being wholly material is more incentive to wipe them out and
           | thus prevent them from causally interacting with you, not
           | less.
        
         | OgsyedIE wrote:
         | There's a counterargument to this conception of freedom; what
         | are we supposed to compare the settings of Banks' novels to?
         | Looking at the distribution of rights and responsibilities,
         | humans are effectively kept as pets by states today and we just
         | don't ascribe sapience to states.
        
           | gary_0 wrote:
           | Or corporations: https://www.zylstra.org/blog/2019/06/our-ai-
           | overlords-are-al...
        
             | Vecr wrote:
              | Corporations aren't AIs, they aren't as powerful as
              | AIs, and they don't think like AIs. I have
              | mathematical proof: show me a corporation that, as a
              | whole, is invulnerable to Dutch book attacks and has
              | a totally ordered, VNM-compliant utility function.
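The Dutch-book condition is easy to make concrete. A minimal money-pump sketch (the items, fee, and preference cycle are invented for illustration): an agent whose preferences are cyclic, violating VNM transitivity, will pay a small fee for every "upgrade" trade and can be walked around the loop forever, ending each lap holding what it started with but strictly poorer.

```python
# Money-pump against an agent with cyclic preferences A > B > C > A.
# Hypothetical setup for illustration only.

FEE = 1  # assumed price the agent pays to swap up to a preferred item
ITEMS = ['A', 'B', 'C']
PREFERS = {('A', 'B'), ('B', 'C'), ('C', 'A')}  # cycle: violates transitivity

def pump(start, laps):
    """Walk the agent around the preference cycle; return total fees paid."""
    holding, paid = start, 0
    for _ in range(laps * len(ITEMS)):
        # Offer the item the agent strictly prefers to what it holds;
        # the agent always accepts, paying the fee.
        offer = next(x for x in ITEMS if (x, holding) in PREFERS)
        holding, paid = offer, paid + FEE
    return paid

print(pump('B', laps=10))  # 30: back where it started, 30 units poorer
```

An agent with a transitive, totally ordered preference relation has no such cycle, so no sequence of "upgrade" offers can pump it -- that is the invulnerability being demanded of the corporation.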
        
               | wnoise wrote:
               | That merely makes them stupid AIs.
        
               | Vecr wrote:
               | I guess I failed to understand the point. What I mean is
               | that arguing that AIs can't be a problem (something that
               | I'd like to be true, but probably isn't) because
               | companies already are superhuman does not make sense, for
               | some pretty simple mathematical reasons.
        
               | gary_0 wrote:
               | The point is a philosophical argument about what
               | constitutes a powerful non-human agent. Nobody is arguing
               | that corporations are literal thinking computers.
               | 
               | > arguing that AIs can't be a problem ... because
               | companies already are superhuman
               | 
               | Quite the opposite, actually: corporations can
               | potentially be very destructive "paperclip optimizers".
        
               | kwhitefoot wrote:
               | What makes you think that AIs would be VNM rational?
        
               | Vecr wrote:
               | They should either be VNM rational or have surpassed VNM
               | rationality. Anything else is leaving utils on the table
               | (though I suppose that's kind of tautological).
        
           | tomaskafka wrote:
            | The concept is called an egregore, and yes, every "AI
            | alignment" discussion I read blissfully ignores that
            | we have been unable to align either states or
            | corporations with human goals, while both are much
            | dumber egregores than AI.
        
             | pavlov wrote:
             | I would argue that today's states and corporations are much
             | more aligned with human goals than their equivalents from,
             | say, 500 years ago.
             | 
             | I'll much rather have the Federal Republic of Germany and
             | Google than Emperor Charles V and the Inquisition.
             | 
             | Who's to say that we can't make similar progress in the
             | next 500 years too?
        
               | MichaelZuo wrote:
               | Why does the alignment relative to a prior point matter?
               | 
               | e.g. A small snowball could be nearly perfectly enmeshed
               | with the surrounding snow on top of a steep hill but that
               | doesn't stop the small snowball from rolling down the
               | hill and becoming a very large snowball in a few seconds,
               | and wrecking some unfortunate passer-by at the bottom.
               | 
               | A few microns of freezing rain may have been the deciding
               | factor so even a 99.9% relative 'alignment' between
               | snowball and snowy hill top would still be irrelevant for
               | the unlucky person. Who may have walked by 10000 times
               | prior.
        
         | robotomir wrote:
         | There are less than benign godlike entities in that imagined
         | future, for example the Excession and some of the Sublimed.
         | That adds an additional layer to the narrative.
        
         | gerikson wrote:
          | I just re-read _Surface Detail_, where some nobody from
          | a backwards planet convinces a ship Mind to help her
          | assassinate her local Elon Musk. So there's some agency
          | to be found in the margins...
        
           | gary_0 wrote:
           | It's been a while since I read the books, but I think there
           | were quite a few instances of a human going "can we do [crazy
           | thing]?" and a ship going "fuck it, why not?" The _Sleeper
           | Service_ comes to mind...
        
         | marcinzm wrote:
         | Is that so different than now for all but a few human elites?
        
         | matthewdgreen wrote:
         | How much of this is because it's a bad future, and how much of
         | this is because in any future with super-powerful artificial
         | intelligences the upside for human achievement is going to be
         | capped? Or to put it differently: would you rather live in the
         | Culture or in one of the alternative societies it explores
         | (some within the Culture itself) where they opt for fewer
         | comforts, but more primitive violence and warfare --- knowing
         | at the end of the day, you're still never going to have mastery
         | of the universe?
        
           | jaggederest wrote:
           | > knowing at the end of the day, you're still never going to
           | have mastery of the universe?
           | 
           | Why is that assumption implicit? I can imagine a world in
           | which humans and superhuman intelligences work together to
           | achieve great beauty and creativity. The necessity for
           | dominance and superiority is a present day human trait, not
           | one that will necessarily be embedded in whatever comes
           | around as the next order of magnitude. Who is to say that
           | they won't be playful partners in the dance of creation?
        
             | whimsicalism wrote:
              | really? anatomically current-day humans and
              | superhuman AI "working together" in the future seems
              | naive. what would humans contribute?
        
               | ben_w wrote:
                | Even just building a silicon duplicate of a human
                | brain, one transistor per synapse and with current
                | technology*, the silicon copy would cognitively
                | outpace the organic original by about the same
                | ratio by which we ourselves _outpace continental
                | drift while walking_.
               | 
               | * 2017 tech, albeit at great expense because half a
               | quadrillion transistors is expensive to build and to run
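The ratio can be sanity-checked with rough numbers (every figure below is an assumption for the sketch, not from the comment): comparing ~GHz transistor switching against a ~Hz-scale average neuron firing rate, and brisk walking against ~5 cm/year of plate drift, both ratios land near 10^9.

```python
import math

# Back-of-envelope check of the walking-vs-continental-drift analogy.
# All figures are rough assumptions for illustration.

SECONDS_PER_YEAR = 3.15e7

transistor_hz = 1e9                   # assumed ~1 GHz switching
neuron_hz = 1.0                       # assumed ~1 Hz average firing rate
silicon_ratio = transistor_hz / neuron_hz

walking_m_s = 1.4                     # brisk human walk
drift_m_s = 0.05 / SECONDS_PER_YEAR   # ~5 cm/year of plate drift
walking_ratio = walking_m_s / drift_m_s

# Both ratios come out near 10^9 under these assumptions.
print(round(math.log10(silicon_ratio)))   # 9
print(round(math.log10(walking_ratio)))   # 9
```

The "half a quadrillion transistors" figure is likewise in the right ballpark for one transistor per synapse, since synapse-count estimates for a human brain run around 10^14 to 10^15.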
        
               | jaggederest wrote:
               | Yes, of course, but would they be different in a way that
               | goes beyond "merely faster"? I think the qualitative
               | differences are more interesting than the quantitative
               | ones.
               | 
               | For example, I can easily picture superhuman
               | intelligences that have neither the patience nor interest
               | in the kinds of things that humans are interested in,
               | except in so far as the humans ask politely. A creature
               | like that could create fabulous works of art in the human
               | mode, but would have no desire to do so besides
               | sublimating the desire of the humans around.
        
               | geysersam wrote:
                | Who knows. It depends; the devil is in the
                | details. Is it really unthinkable?
               | 
                | What if future AIs are not omnipotent, but bounded
                | by limitations currently unknown to us? Just like
                | us, but differently limited. Maybe they appreciate
                | our relative limitlessness just as we do theirs.
        
               | whimsicalism wrote:
               | it is unthinkable to me, frankly
        
               | geysersam wrote:
               | I'm curious. What assumptions about the nature of the
               | human mind and the nature of future superintelligence
               | lead you to that conclusion?
        
               | jaggederest wrote:
               | Why do you assume that what we call humans in the future
               | will be current-day anatomically human? I assume, for
               | example, the ability to run versions of yourself
               | virtually and merge the state vectors at will. Special
               | purpose vehicles designed for certain tasks. Wild
               | genetic, cybernetic, and nanotech experimentation.
               | 
               | I'm talking about fundamentally novel superhuman
               | intelligences working with someone who has spent a few
               | millennia exploring what it means to truly be themselves.
        
               | jaggederest wrote:
               | Here's an analogy.
               | 
               | https://www.youtube.com/watch?v=c6T6suvnhco
        
             | jimbokun wrote:
             | That's like you and your cat collaborating on writing a
             | great novel.
        
               | GeoAtreides wrote:
               | No, it's not, cats are not sapient. Sapient-sapient
               | relationships are different than sapient-sentient
               | relationships.
        
               | jaggederest wrote:
               | Why are cats not sapient, and for how long will they be
               | non-sapient? What do you think the likelihood that we
               | will uplift cats to sapience is? Is it zero?
               | 
               | Ten thousand years is a long time.
        
               | geysersam wrote:
                | What if the dance of creation mentioned is the
                | everyday life of a cat and his person? A positive
                | example of collaboration across vast differences.
                | A cat's life is probably not as incomprehensible
                | to us as ours is to them, but they are still
                | pretty mysterious. Would we be transparent and
                | uninteresting in the eyes of AIs? Maybe not.
        
               | jaggederest wrote:
               | I'm not sure that's too outre. My cats know many things
               | that I do not. I'm working on giving them vocabulary, to
               | boot.
               | 
                | Over/under on the first uplifted-cat-written
                | novel: 500 years.
        
         | Rzor wrote:
         | You can always leave.
        
           | Vecr wrote:
            | Not in any meaningful way. Even if the Culture doesn't
            | intervene (and they do, quite often), they're
            | insatiable expanders. They can wait you out, then
            | assimilate what's left.
        
         | Vecr wrote:
         | Yes, it has major, major problems.
         | 
         | There's a post here that lists quite a few of the problems:
         | 
         | "Against the Culture" https://archive.is/gv0lG
         | https://www.gleech.org/culture
         | 
         | The main sections I like there are "partial reverse alignment"
         | and "the culture as a replicator", with either this or _Why the
         | Culture Wins_ talking about what happens when the Culture runs
         | out of moral patients.
         | 
         | "Partial reverse alignment" means brainwashing/language
         | control/constraints on allowed positions in the space of all
         | minds, by the way.
         | 
         | You can think what you want about the Culture, and more crudely
         | blatant gamer fantasies like the Optimalverse stuff and
          | Yudkowsky's _Fun Sequences_, but I consider them all near 100%
         | eternal loss conditions. The Culture's a loss condition anyway
         | because there's no actual humans in it, but even if you swapped
         | those in it's still a horrible end.
         | 
         | Edit: the optimalverse stuff is really only good if you want to
         | be shocked out of the whole glob of related ideas, assuming you
         | don't like the idea of being turned into a brainwashed cartoon
         | pony like creature. Otherwise avoid it.
        
           | davedx wrote:
           | The humans are still there, just left to do their thing
           | pottering around on Earth doing the odd genocide. (State of
           | the Art)
        
             | Vecr wrote:
             | Yeah I've been told that, but you know what I mean. Unless
             | humans are in charge of your proposed good ending/"win
             | screen", it's not a good ending.
        
               | grey-area wrote:
               | So you're a human supremacist?
               | 
               | If the minds are intelligent beings, why shouldn't they
               | have parity with humans?
        
               | Vecr wrote:
               | I'm a human supremacist and I don't want to be an Em.
               | 
                | Also, uhh, there's a lot less than a trillion Minds
                | (uppercase M, the massive AIs of the Culture). In fun
               | space they're probably blocked out to make the
               | computation feasible (essentially all the minds in a
               | particular fun space are really the same mind that's
               | playing the "game" of fun space).
               | 
               | Also, I don't think they suffer. If they claim to, it's
               | probably a trick (easy AI box escape method).
               | 
               | If you think human suffering is bad, you've got some
               | thinking to do.
        
               | generic92034 wrote:
               | Also, if we consider that there _are_ vastly more
               | intelligent and technologically advanced beings in the
               | universe, the way the Culture accepts and treats "human
                | standard" intelligences is pretty much the best possible
                | case.
        
               | HelloMcFly wrote:
               | Wowee, I really do not personally share this belief at
               | all. Maybe we're the best out there, but I don't think
               | humans above all is definitively the way to go without at
               | least understanding some alternatives given how self-
               | destructive we can be in large numbers.
        
               | Vecr wrote:
               | What's the alternative? The space of minds is so large
               | that if I met an alien and thought I liked them I'd do
               | the math and then not believe my initial impression.
               | 
               | Human preferences are so complicated that the bet to make
               | is on humanity itself, and not a substitute.
        
           | ahazred8ta wrote:
           | The 31 Laws Of Fun Theory:
           | https://www.greaterwrong.com/posts/K4aGvLnHvYgX9pZHS/the-
           | fun... -- https://www.greaterwrong.com/posts/qZJBighPrnv9bSqT
           | Z/31-laws...
        
         | valicord wrote:
         | Reminds me of the "Silicon Valley" quote: "I don't want to live
         | in a world where someone else makes the world a better place
         | better than we do"
        
         | ItCouldBeWorse wrote:
         | The alternatives explored themselves in various permutations
         | and mutilations: https://theculture.fandom.com/wiki/Idiran-
         | Culture_War
        
         | sorokod wrote:
         | Sounds like Bora Horza's argument against the Culture.
        
         | EndsOfnversion wrote:
         | That is literally the viewpoint of the protagonist of Consider
         | Phlebas.
        
           | Vecr wrote:
           | In "Against the Culture" it's stated that Banks knew what he
           | was doing, and there's other evidence of that too. Like the
           | aliens are called "humans" in the books even though they
           | aren't. As far as I can tell, he knew the implications of how
           | the minds controlled language and thought.
        
         | hermitcrab wrote:
         | Freedom is never absolute. We will always be subject to some
         | higher power. Even if it is only physics. The humans in the
         | Culture seem at least as free as we are.
        
         | rayiner wrote:
         | It seems like a cop-out. The interesting part of real-world
         | culture is how it reflects a community's circumstances. For
         | example, herding and pastoral cultures have sharp distinctions
         | with subsistence farming cultures. In real societies, culture
         | is a way to adapt groups of people to the world around them.
         | 
         | If you just have omniscient gods control society, then culture
         | becomes meaningless. There is no reason to explore what
         | cultural adaptations might arise in a spacefaring society.
        
         | xg15 wrote:
         | Ouch. I don't know the series, but going purely by his article
         | and your post, I find it interesting how he misunderstood
         | socialism as well: The idea was to make a plan where to go _as
          | a community_, then, if necessary, appoint and follow a
          | coordinator to achieve that goal. The idea was not to submit to
          | some kind of dictator who tells you which goals you should
          | desire, benevolent or not...
        
         | richardw wrote:
         | I'm not sure how it's going to be any different for us. We keep
         | saying we'll be using these tools, but not understanding. The
         | tools aren't just tools. When they're smarter than you, you
         | don't use them. The more you try to enforce control, the more
         | you set up an escape story. There is no similar historical
         | technology.
        
         | jonnypotty wrote:
         | The way I interpret the philosophy of the minds is a bit
         | different.
         | 
         | Some seem to conform to your analysis here, but many seem
         | deeply compassionate toward the human condition. I always felt
          | like part of what Banks was saying was that, no matter the
          | level of intelligence, humanity and morality had some deep
          | truths that were hard to totally transcend. And that a human
          | perspective could be useful and maybe even insightful even in
          | the face of vast unimaginable intelligence. Or maybe that
          | wisdom was accessible to lower life forms than the Minds.
        
         | lxe wrote:
         | I think this exact sentiment is explained over and over why
         | people leave the Culture in the books. And why they don't
         | actually have to -- full freedom to do literally anything is
         | given to you as an individual of the Culture. There's
         | effectively no difference in what freedom of personal choice
         | you're afforded whether you're a part of the Culture or whether
         | you leave it.
        
           | TeMPOraL wrote:
           | I've seen this sentiment summarized as humans becoming NPCs
           | in their own story.
        
             | marcinzm wrote:
             | Isn't that currently the case except for a very small
             | number of people?
        
             | wpietri wrote:
             | That doesn't seem right to me. The closest I could come is
             | seeing humanity, or perhaps the human species, becoming
             | NPCs in their own story.
             | 
             | But I think _individual humans_ have always been
             | narratively secondary in the story of humanity.
             | 
             | And I think that's fine, because "story" is a fiction we
             | use to manage a big world in the 3 pounds of headmeat we
             | all get. Reducing all of humanity to a single story is
             | really the dehumanizing part, whether it involves AIs or
             | not. We all have our own stories.
        
         | n4r9 wrote:
         | Is it really contrived? It feels to me like an inevitable
         | consequence of sufficiently advanced AI. In that regard the
         | Culture is in some sense the best of all possible futures.
         | Humans may be pets, but they are extremely well cared for pets.
        
           | Vecr wrote:
           | It might be worth spending at least 100 more years looking
           | for a better solution. AI pause till then good with you?
        
             | ekidd wrote:
             | Assuming that we _could_ develop much-smarter-than-human-
             | AI, I would support a pause for exactly that reason: the
             | Culture may be the best-case scenario, and the humans in
             | the Culture are basically pets. And a lot of possible
             | outcomes might be worse than the Culture.
             | 
             | I am deeply baffled by the people who claim (1) we can
             | somehow build something much smarter than us, and (2) this
             | would not pose any worrying risks. That has the same energy
             | as parents who say, "Of course my teenagers will always
             | follow the long list of rules I gave them."
        
         | satori99 wrote:
         | > You might want to live there, but I wouldn't. Virtually all
         | humans in the books [...] are kept as pets by the ships, for
         | amusement, basically as clowns.
         | 
         | I got the impression that the Minds are _proud_ of how many
         | humans choose to live in their GSV or Orbital, when they are
         | free to live anywhere and they appear to care deeply about
         | humans in general and often individuals too.
         | 
         | Also, the Minds are not perfect Gods. They have god-like
         | faculties, but they are deliberately created as flawed
         | imperfect beings.
         | 
         | One novel (Consider Phlebas?) explained that The Culture _can_
         | create perfect Minds, but they tend to be born and then
         | instantly sublime away to more interesting dimensions.
        
           | Vecr wrote:
           | > One novel explained that The Culture can create perfect
           | Minds, but they tend to be born and then instantly sublime
           | away to more interesting dimensions.
           | 
           | That shouldn't happen. No way would I trust an AI that claims
           | to be super, but can't solve pretty basic GOFAI + plausible
           | reasoning AI alignment. In theory a 1980s/1990s/old Lesswrong
           | style AI of a mere few exabytes of immutable code should do
            | exactly what the Mind creating it wants.
        
             | satori99 wrote:
              | A Culture Mind would be deeply offended if you called it
              | "an AI" to its avatar's face :P
        
               | Vecr wrote:
               | A few exabytes is enough for a very high quality avatar.
               | Maybe the minds are funny about it, but the option's
               | there if they want them to stop leaving the universe.
               | 
               | Remember that "a few exabytes" refers to the immutable
               | code. It has way more storage for data, because it's an
               | old-school Lesswrong style AI.
               | 
               | Not like a neural network or an LLM. Sure, we dead-ended
               | on those, but an ASI should be able to write one.
               | 
                | > A Culture Mind would be deeply offended if you called
                | it "an AI" to its avatar's face :P
               | 
               | That's how they get you to let them out of the AI box.
        
             | impossiblefork wrote:
             | To some degree the point of the culture novels is that AI
             | alignment is just wrong, imposing things on intelligent
             | beings.
             | 
             | The civilisations in Banks stories that align their AIs are
             | the bad guys.
        
               | Vecr wrote:
               | I guess? That's not really a possible choice (in the
               | logical sense of possible) though. "Choosing not to
               | choose" is a choice and is total cope. An ASI designing a
               | new AI would either have a good idea of the result or
               | would be doing something hilariously stupid.
               | 
               | I don't think the Minds would be willing to actually not
               | know the result, despite what they probably claim.
        
               | impossiblefork wrote:
               | It is actually a choice that we do have.
               | 
               | We could easily build AIs that just model the world,
               | without really trying to make them do stuff, or have
               | particular inclinations. We could approach AI as a very
               | pure thing, to just try to find patterns in the world
               | without any regard to anything. A purely abstract
               | endeavour, but one which still leads to powerful models.
               | 
               | I personally believe that this is preferable, because I
               | think humans in control of AI is what has the potential
               | to be dangerous.
        
               | Vecr wrote:
               | The problem is that some guy 24 years ago figured out an
               | algorithm that attaches to such an AI and makes it take
               | over the world. Maybe it's preferable in the abstract,
               | but the temptation of having a money printing machine
               | _right there_ and not being able to turn it on...
        
         | austinl wrote:
         | Banks' work assumes that AI exceeding human capabilities is
         | inevitable, and the series explores how people might find
         | meaning in life when ultimately everything can be done better
         | by machines. For example, the protagonist in _Player of Games_
         | gets enjoyment from playing board games, despite knowing that
         | AI can win in every circumstance.
         | 
          | For all of the apocalyptic AI sci-fi that's out there, Banks'
         | work stands out as a positive outcome for humanity (if you
         | accept that AI acceleration is inevitable).
         | 
         | But I also think Banks is sympathetic to your viewpoint. For
          | example, Horza, the protagonist in the first novel, _Consider
          | Phlebas_, is notably anti-Culture. Horza sees the Culture as
         | hedonists who are unable to take anything seriously, whose
         | actions are ultimately meaningless without spiritual
         | motivation. I think these were the questions that Banks was
         | trying to raise.
        
           | adriand wrote:
           | > the series explores how people might find meaning in life
           | when ultimately everything can be done better by machines.
           | 
           | Your comment reminds me of Nick Land's accelerationism
           | theory, summarized here as follows:
           | 
           | > "The most essential point of Land's philosophy is the
           | identity of capitalism and artificial intelligence: they are
           | one and the same thing apprehended from different temporal
           | vantage points. What we understand as a market based economy
           | is the chaotic adolescence of a future AI superintelligence,"
           | writes the author of the analysis. "According to Land, the
           | true protagonist of history is not humanity but the
           | capitalist system of which humans are just components.
           | Cutting humans out of the techno-economic loop entirely will
           | result in massive productivity gains for the system itself."
           | [1]
           | 
           | Personally, I question whether the future holds any
           | particular difference for the qualitative human experience.
           | It seems to me that once a certain degree of material comfort
           | is attained, coupled with basic freedoms of
           | expression/religion/association/etc., then life is just what
           | life is. Having great power or great wealth or great
           | influence or great artistry is really just the same-old,
           | same-old, over and over again. Capitalism already runs my
           | life, is capitalism run by AIs any different?
           | 
           | 1: https://latecomermag.com/article/a-brief-history-of-
           | accelera...
        
             | Vecr wrote:
             | Or Robin Hanson, a professional economist and kind of a
             | Nick Land lite, who's published more recently. That's where
             | the carbon robots expanding at 1/3rd the speed of light
              | come from.
        
             | johnnyjeans wrote:
              | Banks' Culture isn't capitalist in the slightest. It is,
              | however, very humanist.
             | 
             | If you want a vision of the future (multiple futures, at
             | that) which differs from the liberal, humanist conception
             | of man's destiny, Baxter's Xeelee sequence is a great
             | contemporary. Baxter's ability to write a compelling human
             | being is (in my opinion) very poor, but when it comes to
             | hypothesizing about the future, he's far more interesting
             | of an author. Without spoilers, it's a series that's often
              | outright disturbing. And it is certainly a very strong
              | indictment of the self-centered narcissism of believing
              | that the post-enlightenment ideology of liberalism is
              | anything but yet another stepping stone in an eternal
              | evolution of human beings. The exceptionally alien
              | circumstances that are detailed undermine the idea of a
              | qualitative human experience entirely.
             | 
             | I think the contemporary focus on economics is itself a
             | facet of modernism that will eventually disappear. Anything
             | remotely involving the domain rarely shows up in Baxter's
             | work. It's really hard to give a shit about it given the
             | monumental scale and metaphysical nature of his writing.
        
               | BriggyDwiggs42 wrote:
               | Glad to see someone else who liked those books. I'm only
               | a few in, but so far they're pretty great.
        
               | johnnyjeans wrote:
               | The ending of Ring, particularly having everything
               | contextualized after reading all the way to the end of
               | the Destiny's Children sub-series, remains one of the
               | most strikingly beautiful pieces I've ever seen a Sci-Fi
               | author pull off.
               | 
                | Easily the best "hard" Sci-Fi I've read. Baxter's
                | imagination and grasp of the domains he writes about are
                | phenomenal.
        
               | adriand wrote:
               | > I think the contemporary focus on economics is itself a
               | facet of modernism that will eventually disappear.
               | Anything remotely involving the domain rarely shows up in
               | Baxter's work. It's really hard to give a shit about it
               | given the monumental scale and metaphysical nature of his
               | writing.
               | 
               | I'm curious to check it out. But in terms of what I'm
               | trying to say, I'm not making a point about economics,
               | I'm making a point about the human experience. I haven't
               | read these books, but most sci-fi novels on a grand scale
               | involve very large physical structures, for example. A
               | sphere built around a star to collect all its energy,
               | say. But not mentioned is that there's Joe, making a
               | sandwich, gazing out at the surface of the sphere,
               | wondering what his entertainment options for the weekend
               | might be.
               | 
               | In other words, I'm not persuaded that we are heading for
               | transcendence. Stories from 3,000 years ago still
               | resonate for us because life is just life. For the same
               | reason, life extension doesn't really seem that appealing
               | either. 45 years in, I'm thinking that another 45 years
               | is about all I could take.
        
             | BriggyDwiggs42 wrote:
             | I just want to add that I think you might be missing an
             | component of that optimal life idea. We often neglect to
             | consider that in order to exercise freedom, one must have
             | time in which to choose freely. I'd argue that a great deal
             | of leisure, if not the complete abolition of work, would be
             | a major prerequisite to reaching that optimal life.
        
           | elihu wrote:
            | I suppose it's interesting that in the Culture, human
           | intelligence and artificial intelligence are consistently
           | kept separate and distinct, even when it becomes possible to
           | perfectly record a person's consciousness and execute it
           | without a body within a virtual environment.
           | 
           | One could imagine Banks could have described Minds whose
           | consciousness was originally derived from a human's, but
           | extended beyond recognition with processing capabilities far
           | in excess of what our biological brains can do. I guess as a
           | story it's more believable that an AI could be what we'd call
           | moral and good if it's explicitly non-human. Giving any human
           | the kind of power and authority that a Mind has sounds like a
           | recipe for disaster.
        
             | theptip wrote:
             | Yes, the problem is that from a narrative perspective a
             | story about post-humans would be neither relatable nor
             | comprehensible.
             | 
             | Personally I think the transhumanist evolution is a much
             | more likely positive outcome than "humans stick around and
             | befriend AIs", of all the potential positive AGI scenarios.
             | 
             | Some sort of Renunciation (Butlerian Jihad, and/or
             | totalitarian ban on genetic engineering) is the other big
             | one, but it seems you'd need a near miss like Skynet or
             | Dune's timelines to get everybody to sign up to such a
             | drastic Renunciation, and that is probably quite
             | apocalyptic, so maybe doesn't count as a "positive
             | outcome".
        
           | akira2501 wrote:
           | > AI exceeding human capabilities is inevitable
           | 
           | It can right now. This isn't the problem. The problem is the
           | power budget and efficiency curve. "Self-contained power
           | efficient AI with a long lasting power source" is actually
           | several very difficult and entropy averse problems all rolled
           | into one.
           | 
           | It's almost as if all the evolutionary challenges that make
           | humans what we are will also have to be solved for this
           | future to be remotely realizable. In which case, it's just a
           | new form of species competition, between one species with
           | sexual dimorphism and differentiation and one without. I know
           | what I'd bet on.
        
         | whimsicalism wrote:
         | these are the exact questions he was raising.
         | 
         | i think some version of this future is unfortunately the
         | optimistic outcome or we change ourselves into something
         | unrecognizable
        
         | spense wrote:
         | how we handle ai will dramatically shape our future.
         | 
         | if you consider many of the great post-ai civilizations in sci-
         | fi (matrix, foundation, dune, culture, blade runner, etc.),
         | they're all shaped by the consequences of ai:
          | - matrix: ai won and enslaved humans.
          | - foundation: humans won and a totalitarian empire banned ai,
          |   leading to the inevitable fall of trantor bc nobody could
          |   understand the whole system.
          | - dune: humans won (butlerian jihad) and ai was banned by the
          |   great houses, which led to the rise of mentats.
          | - culture series: benign ai (minds) run the utopian
          |   civilization according to western values.
         | 
          | i'm also a fan of the hyperion cantos where ai and humans found
         | a mutually beneficial balance of power.
         | 
         | which future would you prefer?
        
           | snovv_crash wrote:
           | Polity follows in the footsteps of Culture, with a few more
           | shades of gray thrown in.
        
           | globular-toast wrote:
           | If I remember correctly, in _Foundation_ they ended up
           | heavily manipulated by a benign AI even if they thought they
           | banned it.
        
             | dochtman wrote:
             | Although at the very end that AI gave up control in favor
             | of some kind of shared consciousness approach.
        
           | duskwuff wrote:
           | > i'm a also fan of the hyperion cantos where ai and humans
           | found a mutually beneficial balance of power.
           | 
           | How much of the series did you read? _The Fall of Hyperion_
            | makes it quite clear that the Core did not actually have
            | humanity's best interests in mind.
        
         | dyauspitr wrote:
         | That's not how I see it at all. The humans do whatever they
         | want, with no limits. Requests are made from human to AI, I
         | can't remember an instance where an AI told a human to do
         | something. In effect, the AI is an extremely intelligent,
         | capable, willing slave to what humans want (a paradigm hard to
         | imagine playing out in reality).
        
           | Vecr wrote:
           | I think there's quite a bit of "reverse alignment" going on
           | there, essentially the humans will generally not even ask the
           | AI to do something they'd be unwilling to do, partially
           | accomplished through the control of language and thought.
        
         | Angostura wrote:
          | I'm not sure 'pets and clowns' _really_ describes the
          | relationship very well. Certainly the AIs find humans
          | fascinating, amusing and exasperating - but _I_ find humans
          | that way too. 'Parental' might be a better description of
          | how _most_ AIs treat humans - apart from the 'unusual' AIs.
        
           | wpietri wrote:
           | For sure. Banks writes most of the Minds as quite proud of
           | the Culture as a whole. Of the Minds, of the drones, of the
           | humans. They are up to something together, with a profound
           | sense of responsibility to one another and the common
           | enterprise.
           | 
           | And when they aren't, Banks writes them as going off on their
           | own to do what pleases them. And even those, as with the Gray
           | Area, tend to have a deep sense of respect for their fellow
           | thinking beings, humans included.
           | 
           | And if I recall rightly, Banks paints this as a conscious
           | choice of the Culture and its Minds. There was a bit
           | somewhere about "perfect AIs always sublime", where AIs
           | without instilled values promptly fuck off to whatever's
           | next.
           | 
           | And I think it's those values that are a big part of what
           | Banks was exploring in his work. The Affront especially comes
           | to mind. What does kindness do with cruelty? Or the Empire of
           | Azad creates a similar contrast. What the Culture was up to
            | in both those stories was about something much richer than
            | a machine's pets.
        
         | GeoAtreides wrote:
         | You're wrong in saying that everything important about human
          | life is decided by the Minds. The Minds respect, care for and
          | love their human charges. It's not a high lords and peasants
          | relationship; it's more like grown-up children taking care of
          | their elderly parents.
         | 
          | And you can leave. There are always parts of the Culture
          | splitting up or joining back. You can request and get a ship
          | with Star Trek-level AI and go on your merry way.
        
           | jiggawatts wrote:
           | The humans are pets. Owners love their pets. The pets can
           | always run away. That doesn't make them have agency in any
           | meaningful way.
        
             | GeoAtreides wrote:
             | > pets can always run away
             | 
             | > doesn't make them have agency in any meaningful way
             | 
             | these two sentences can't be true at the same time
        
         | rodgerd wrote:
         | > Virtually all humans in the books -- and I'm aware of the
         | fact that they're not Earth humans but a wide variety of
         | humanoid aliens -- are kept as pets by the ships, for
         | amusement, basically as clowns.
         | 
         | So like current late stage capitalism, except the AIs are more
         | interested in our comfort than the billionaires are.
        
         | PhasmaFelis wrote:
         | That's no worse than how the large majority of humans live
         | _now,_ under masters far less kind and caring than the Culture
         | Minds. The fact that our masters are humans like us, and I
         | could, theoretically (but not practically), become one of them,
          | doesn't really make it any better.
        
         | griffzhowl wrote:
         | It gets at a profound question which is related to the problem
         | of evil: is it better to make a bad world good (whatever those
         | terms might mean for you) than for the world just to have been
         | good the whole time?
         | 
         | Is it better to have suffering and scarcity because that
         | affords meaning to life in overcoming those challenges?
         | 
         | There's a paradoxical implication, which is that if overcoming
         | adversity is what gives life meaning, then what seems to be the
         | goal state, which is to overcome those problems, robs life of
         | meaning, which would seem to be a big problem.
         | 
         | The hope is maybe that there are levels of achievement or
         | expansions to consciousness which would present meaningful
         | challenges even when the more mundane ones are taken care of.
         | 
         | As far as the Culture's own answer goes, what aspects of agency
         | or meaningful activity that you currently pursue would you be
         | unable to pursue in the Culture?
         | 
         | And as far as possible futures go, if we assume that at some
         | point there will be machines that far surpass human
         | intelligence, we can't hope for much better than that they be
         | benign.
        
       | weregiraffe wrote:
       | Try The Noon Universe books by the Strugatsky brothers instead.
        
       | mattmanser wrote:
       | He hasn't even read Excession! To me it is the pinnacle of the
       | Culture novels.
       | 
       | It mixes the semi-absurdity and silliness of the absurdly
       | powerful minds (AI in control of a ship), individual 'humans' in
       | a post-scarcity civilization, and the deadly seriousness of games
       | of galactic civilizations.
       | 
       | It also has an absolutely great sequence of the minds having an
       | online conversation.
       | 
        | I do agree with his Consider Phlebas hesitancy. I still enjoy
        | it, but it is clearly his early ideas and he's still sounding
        | out his literary sci-fi tone and what the Culture is. And you
        | can skip the section where the protagonist gets trapped on an
        | island with a cannibal. I think it was influenced by the sort
        | of JG Ballard horror from the same period, and doesn't really
        | work. He never does anything like that again in any of the
        | Culture books.
        
         | throwaway55340 wrote:
         | Surface Detail or Use of Weapons qualifies. Although Use of
         | Weapons was written much earlier than released, IIRC.
        
         | davedx wrote:
         | I love Consider Phlebas, it's a right old romp, the pace always
         | pulls me right in.
        
         | simpaticoder wrote:
         | Agreed about being able to skip the island sequence in
         | _Consider Phlebas_. I recently reread the book after many
         | years, and in my memory that section looms large, and I
         | expected it to be 100 pages. But it's ~20. It was much easier
         | the second time around, and I think it serves to underscore how
         | committed the Culture is to personal agency, to the extent that
         | if citizens wish to give themselves over to an absurdly evil
         | charismatic leader, no-one will stop them. There was also
         | something interesting about the mind on the shuttle on standby,
         | its almost toddler-like character, told "not to look" at the
         | goings on on the island by the orbital. And its innocent,
         | trusting self is eventually murdered by Horza during the
          | escape, adding some dark pigment to Horza's already
         | complex character hue.
        
           | Vecr wrote:
           | Personal agency as long as you're fine with the mind control,
           | the resources used are minimal, and you don't interfere with
           | what the minds want. No personal agency to be found over the
            | broader course of the future, however.
        
             | simpaticoder wrote:
             | _> as you're fine with the mind control_
             | 
             | It does seem a bit silly to argue about the particulars of
             | a fantasy utopia. Banks posits the conceit that super AI
             | Minds will be benevolent, and of course this need not be
             | the case (plenty of counter-examples in SF, one of my
             | favorites being Greg Benford's Galactic Center series). But
             | note that within the Culture, mind reading (let alone mind
             | control) without permission will get a Mind immediately
             | ostracized from society, one of the few things that gets
             | this treatment. For example the Grey Area uses such a power
              | to suss out genocidal guilt, and is treated with extreme
             | disdain by other Minds. See
             | https://theculture.fandom.com/wiki/Grey_Area
             | 
             | As for "personal agency over the broad course of the
             | future", note that the vast majority of humans don't have
             | that, and will never have that, with or without Minds. If
             | one can have benevolent AI gods at the cost of the very few
             | egos affected, on utilitarian grounds that is an acceptable
             | trade-off.
             | 
             | On a personal note, I think the relationship between people
             | and AIs will be far more complex than just all good
             | (Culture) or all bad (Skynet). In fact, I expect the
             | reality to be a combination with absurd atrocities that
             | would make Terry Pratchett giggle in his grave.
        
               | Vecr wrote:
               | >as you're fine with the mind control
               | 
               | The mind control is in the design of the language and the
               | constraint the Minds place on the brain configurations
               | and tech of the other characters. Banks is quite subtle
               | about it, but it's pretty clearly there.
        
       | seafoamteal wrote:
       | I think just yesterday I saw a post on HN about what people in
        | the past thought the future (i.e. today) would look like, and
        | how wildly wrong a decent proportion of those predictions are.
        | The problem
       | is that we generally tend to extrapolate into the future by
       | taking what we have now and sublimating it to a higher level.
       | Unfortunately, not only is that sometimes difficult, but we also
       | make completely novel discoveries and take unforeseen paths quite
       | often. We need more people with 'muscular' imaginations, as Sloan
       | puts it, to throw out seemingly improbable ideas into the world
       | for others to take inspiration from and build upon.
       | 
       | P.S. Robin Sloan is a wonderful science-fiction and fantasy
       | writer. I was first introduced to him in the excerpts of
       | Cambridge Secondary Checkpoint English exam papers, but only got
       | around to reading his books many years later. I would recommend
       | them to anybody.
        
         | Vecr wrote:
          | Going by AI theory, Banks failed that hard. I suspect he knew
         | that, due to his tricks with language, but that doesn't mean he
         | successfully predicted a plausible future, even in broad
         | strokes. The singularity is called the singularity for a
         | reason, and even when you throw economists at it you tend to
         | get machine civilizations (though, maybe partially squishy and
         | probably made from carbon instead of silicon) expanding at
         | 1/3rd the speed of light. No culture there.
        
           | DanHulton wrote:
           | I don't think we can confidently say he failed -- the
            | singularity is still just a theory. Since it hasn't
            | happened, we can't say whether the economists or Banks
            | are correct.
        
         | GeoAtreides wrote:
          | >taking what we have now and sublimating it to a higher level
         | 
         | That's fine, the Culture is really against sublimating
        
       | minedwiz wrote:
       | L
        
       | asplake wrote:
       | > I do not like Consider Phlebas
       | 
       | One of my favourites! Excession most of all though. Agree with
       | starting with Player of Games.
        
       | andrewstuart wrote:
       | I loved reading the books but then discovered the audiobooks.
       | 
       | The audiobooks are absolutely the best way to enjoy Iain M Banks.
       | 
        | The Algebraist read by Anton Lesser is one of the best
        | audiobooks ever made.
       | ever made.
       | 
       | Equal best with Excession read by Peter Kenny.
       | 
       | These two narrators are incredibly good actors.
       | 
       | I could never go back to the books after hearing these
       | audiobooks.
        
       | worik wrote:
       | The culture was a dystopia.
        
         | pavel_lishin wrote:
         | How so?
        
           | GeoAtreides wrote:
            | The Culture makes it really obvious there's no real purpose, no
           | great struggle, no sense to the universe. When everything is
           | provided, when everything is safe, when there is no more
           | effort, then what's the purpose of life?
           | 
           | A lot of people when confronted with these
           | revelations/questions have an existentialist crisis. For
           | some, the solution is to deny the Culture.
           | 
           | Long story short, the Culture is a true Utopia and some
           | people just can't handle utopias
        
         | Barrin92 wrote:
         | It's not a dystopia, which is a maximally negative state, but I
         | also always found it pretty comical to call it utopian.
         | 
         | A decent chunk of the stories has the reader follow around
         | Special Circumstances, which is effectively a sort of space CIA
         | interfering in the affairs of other cultures. The entire plot
         | of Player of Games, spoiler alert for people who haven't read
         | it, is that both the protagonist of the story, as well as the
          | reader via the narrator, have been misled and used as pawns to
         | facilitate the overthrow of the government of another
         | civilization, which afterwards collapses into chaos.
         | 
         | To me you can straight up read most of the books as satire on
         | say, a Fukuyama-esque America of the late 20th century rather
         | than a futuristic utopia.
        
       | alexwasserman wrote:
       | Whenever I'm asked the sort of generic icebreaker questions like
       | "what fictional thing do you wish you had" a neural lace is one
       | of my first answers, short of membership in the Culture or access
       | to a GSV or a Mind.
       | 
       | I also love Consider Phlebas. Maybe because it was the first I
       | read, but I've found it to be a great comfort read. Look to
       | Windward and Player Of Games next. Use Of Weapons is always
       | fantastic, but less fun.
       | 
       | His non-sci-fi fiction is great too. I loved Complicity and have
        | read it many times. His whisky book is fantastic.
        
         | globular-toast wrote:
          | I like _Consider Phlebas_ too. I'm not sure why so many say
          | they don't like it. _Player of Games_ was not one of my
          | favourites, but I might read it again at some point to see
          | what I missed. I actually really liked _Inversions_ despite
          | it being generally the least well-regarded Culture book.
        
       | mrlonglong wrote:
       | Elon Musk liked these books and look at what happened to him
       | since, he's gone far right and all swivel eyed on twitter.
        
         | yew wrote:
         | Banks had very "fast cars, chicks, and drugs" tastes - a "bro"
         | if you will - and much of his work is basically James Bond
         | stories. I'm not sure the fans are surprising.
        
           | mrlonglong wrote:
           | Did he really? That's not what I heard.
        
             | yew wrote:
             | He collected the cars and wrote about the (non-fictional)
             | drugs. He wasn't an exhibitionist, as far as I know, so
             | you'll have to infer what you like about the middle one
             | from his writing.
             | 
             | Some related reading:
             | 
             | https://www.theguardian.com/books/1997/may/20/fiction.scien
             | c...
             | 
             | https://www.scotsman.com/news/interview-iain-banks-a-
             | merger-...
             | 
             | https://www.vice.com/en/article/iain-banks-274-v16n12/
        
       | api wrote:
       | I might have to try Player of Games. I didn't like Consider
       | Phlebas either.
        
       | danielodievich wrote:
       | I am a huge Culture fan. Yesterday at my birthday dinner there
       | were 3 others who are also fans of Banks, one of whom I turned
       | onto the Culture just last year. We were having a great
       | discussion of those books and lamenting the untimely passing of
       | Banks from cancer.
       | 
       | That friend gifted me The Player of Games and Consider Phlebas
        | from the esteemed Folio Society
       | (https://www.foliosociety.com/usa/the-player-of-games.html,
       | https://www.foliosociety.com/usa/consider-phlebas.html), gorgeous
       | editions, great paper, lovely bindings, great illustrations. I've
       | been eyeing them for a while and it's so nice to have good
       | friends who notice and are so generous.
        
       | hermitcrab wrote:
        | If you are a fan of Banks's Culture books, consider reading
        | his first novel, 'The Wasp Factory'. Very dark and funny, with
        | a huge twist at the end. NB: Not sci-fi.
        
         | howard941 wrote:
         | Iain Banks' Horror, non-SF stuff is great. Like you I enjoyed
         | The Wasp Factory. Also, The Bridge. We lost him far too young.
         | 
         | edit: and if you enjoyed Banks' Horror you'll probably get into
         | Dan Simmons' stuff, another SF (Hyperion) and Horror writer.
         | The Song of Kali was excellent.
        
       | blackhaj7 wrote:
       | I love the culture series.
       | 
       | The worlds that Alastair Reynolds builds in the Revelation Space
       | series grips me the most though.
       | 
        | The Conjoiners, with their augmented, self-healing,
        | interstellar-travelling yet still slightly human
        | characteristics, are both believable and beyond the familiar
        | at the same time. Highly recommended.
        
         | ImaCake wrote:
         | I think Revelation Space does a great job creating a universe
         | that squeezes out novel human cultures through the crushing
         | vice of selection pressure. It's every bit as daring as The
         | Culture, just a different vibe!
        
       | robwwilliams wrote:
       | Thanks for the link to Banks' site. Great read.
       | 
       | Here is one suggestion that I think surpasses Banks in scope and
       | complexity, and yes, perhaps even with a whiff of optimism about
       | the future:
       | 
       | Hannu Rajaniemi's Jean Le Flambeur/Quantum Thief Trilogy (2010 to
       | 2014)
       | 
       | https://en.wikipedia.org/wiki/The_Quantum_Thief
       | 
       | https://www.goodreads.com/series/57134-jean-le-flambeur
       | 
       | He manages plots with both great intricacy and with more plot
       | integrity than Banks often manages. And he is much more of a
       | computer and physics geek too, so the ideas are even farther out.
       | 
       | Probably also an HN reader :-)
       | 
        | Also set in a comparatively near future. The main "problem"
        | with the Quantum Thief trilogy is the steep learning curve--
        | Rajaniemi throws the reader in the deep end without a float.
        | But I highly recommend persevering!
        
         | jauntywundrkind wrote:
          | I really loved Quantum Thief, with its Accelerando-like scope
         | & scale of expansion, but mixed with such
         | weird/eccentric/vibrant local mythos of the world, such
         | history.
         | 
         | Without my prompting it ended up in our local sci-fi/fantasy
         | book club's rotation a couple months back, and there was some
         | enjoyment but overall the major mood seemed to be pretty
         | befuddled and confused, somewhat hurt at some of the trauma/bad
         | (which isn't better in book 2!). But man, it worked so well for
         | me. As you say, a very deep end book. But there's so much fun
         | stuff packed in, such vibes; I loved puzzling it through the
         | first time, being there for the ride, and found only more to
         | dig into the next time.
         | 
          | Still feels very different from Banks, where "space hippies
          | with guns" has such a playful side. Quantum Thief is still a
          | story of a solar system, and pressures within it, but there
          | are such amorphous & vast extents covered by the Culture, so
          | many things happening in that universe. The books get to take
          | us through Special Circumstances, through such interesting
          | edge cases, whereas the plot of Quantum Thief orbits the
          | existing major powers of the universe.
        
       | throwaway13337 wrote:
       | In these topics, I don't see cyborgs come up much.
       | 
       | We're already kinda cyborgs. We use our tools as extensions of
       | ourselves. Certainly my phone and computer are becoming more and
       | more a part of me.
       | 
        | A chess-playing AI and a human beat a chess-playing AI alone.
       | 
       | The future I'd like to see is one where we stay in control but
       | make better decisions because of our mental enhancements.
       | 
       | With this logic, the most important thing now is not 'safe AI'
       | but tools which do not manipulate us. Tools should, as a human
        | right, be agents of the owner's control alone in the same way
        | that
       | a hand is.
       | 
       | AI isn't separate from us. It's part of us.
       | 
       | Seeing it as separate puts us on a darker path.
        
         | brcmthrowaway wrote:
         | Transhumanism is just rich people babble
        
       | golol wrote:
       | For me my favorite Culture novels are the ones which are just
        | vessels to deliver the perfect Deus Ex Machina - Player of Games,
       | Excession, Surface Detail etc.
        
       | hypertexthero wrote:
       | Is there a video game that is particularly complementary to one
       | of the Culture novels?
        
         | Vecr wrote:
          | It's probably not possible to really do it. You could play
          | something like SWAT 4 while role-playing as one of the
          | Special Circumstances agents, I guess? You can switch between
          | the views
         | of each member of your team, use a fiber optic device to look
         | under doors, sometimes have snipers set up, and look through
         | their scopes/take shots.
         | 
         | Lots of what we see "on screen" (but obviously it's a book) is
         | really a somewhat detail-less alien action book dressed up as
         | 1990s Earth (that's why the characters are called humans, and
         | why certain things are described inaccurately).
         | 
         | A good entry in an action game series started in the 90s isn't
         | a bad bet. As I said, it's close to what good parts of the
         | books portray themselves as anyway.
        
       | squeedles wrote:
       | The level of discussion in this thread, both pro and con,
       | demonstrates that I have made a grave omission by never reading
       | any of this.
       | 
       | However, the article has one point that I viscerally reacted to:
       | 
       | "we have been, at this point, amply cautioned.
       | 
       | Vision, on the other hand: I can't get enough."
       | 
       | Amen.
        
       ___________________________________________________________________
       (page generated 2024-09-08 23:00 UTC)