[HN Gopher] Lena
       ___________________________________________________________________
        
       Lena
        
       Author : burkaman
       Score  : 449 points
       Date   : 2021-02-22 14:21 UTC (8 hours ago)
        
 (HTM) web link (qntm.org)
 (TXT) w3m dump (qntm.org)
        
       | joshstrange wrote:
       | If you like sci-fi about this topic I recommend The Bobiverse
        | books (don't be put off by the silly-sounding name, it's a good
       | series). Also "Fall; Or, Dodge in Hell" is a good one about brain
       | simulation.
        
         | zenon wrote:
         | Also The Quantum Thief trilogy by Hannu Rajaniemi. Excellent
         | sci-fi, horrifying universe.
        
         | lytedev wrote:
         | Seconding Bobiverse! Really fun set of books!
        
           | joshstrange wrote:
           | If you liked Bobiverse you should also check out the
           | Expeditionary Force books by Craig Alanson. The most recent
            | Bobiverse book (Book 4) makes multiple references to ExForces.
           | 
           | I will warn you there are parts of the first 1-2 books that
           | feel a little repetitive but it really gets better as the
           | series goes on. The author was writing part-time at the start
           | and then he went full time and the books improved IMHO.
        
         | statenjason wrote:
         | Came here to recommend "Fall; Or, Dodge in Hell" as well. I
         | recently finished it. While Stephenson can get long-winded, it
          | was a thought-provoking story about how brain simulation is
         | received by the world.
         | 
         | Will check out Bobiverse. Thanks for the recommendation!
        
         | RobertoG wrote:
         | In my opinion, the best fiction book about this subject is
         | 'Permutation City' by Greg Egan.
         | 
         | Also, this one is pretty good:
         | 
         | https://sifter.org/~simon/AfterLife/index.html
         | 
          | And, in a very similar vein to "Lena", this one by Vernor
         | Vinge:
         | 
         | https://en.wikipedia.org/wiki/The_Cookie_Monster_(novella)
        
         | dochtman wrote:
         | I like much of Stephenson's work, but Fall did not rank near
         | the top for me. The parts in the virtual world get pretty
         | boring, with little payoff.
        
           | centimeter wrote:
           | Stephenson went from "uncensorable machine gun schematics" in
           | the 90s to "but what if someone posts fake news on Facebook?"
           | in 2020. His newer books average a lot worse than his older
           | books.
        
           | joshstrange wrote:
           | I agree, the last third of the book veered off into stuff I
            | didn't find very interesting. I found the first two thirds or
            | so immensely interesting, though, which is why I still
            | recommend it to people. But you aren't wrong.
        
       | zero_deg_kevin wrote:
        | If you like this, the story of Henrietta Lacks (Miguel from
        | this story, but with even less consent) is also worth a read.
       | 
       | https://en.m.wikipedia.org/wiki/Henrietta_Lacks
        
         | yesenadam wrote:
         | There's an Adam Curtis documentary on the subject, _The Way of
          | All Flesh_ (1997), which seems rather good. Interviews with many
         | of the people involved.
         | 
         | https://www.youtube.com/watch?v=R60OUKt8OGI
        
       | dale_glass wrote:
       | It's interesting, but strikes me as very unrealistic. I don't
       | think it'd go that way. In fact, it'd be far more horrifying.
       | 
        | We wouldn't bother trying to convince an image of a brain to
        | cooperate, because we'd lose any need to do that very quickly.
       | 
       | One of the very first things we'd do with a simulated brain is to
        | debug it. Execute it step by step, take lots of measurements of
        | all parameters, save/reload state, test every possible input and
        | variation. And I'm sure it wouldn't take long to start getting
        | some sort of interesting result, first superficial, then deeper
       | and deeper.
       | 
       | Cooperation would quickly become unnecessary because you either
       | start from a cooperative state every time, or you quickly figure
       | out how to tweak the brain state into cooperation.
       | 
       | And that's when the truly freaky stuff starts. Using such a tool
       | we could figure out many things about a brain's inner workings.
       | How do we truly respond to advertising? How to produce maximum
       | anger and maximum cooperation? How to best implant false
       | memories? How to craft a convincing lie? What are the bugs and
       | flaws in human perception? We could fuzz it and see if we can
       | crash a brain.
       | 
        | We've already made some uncomfortable advancements, e.g. in how
        | free-to-play games intentionally try to create addiction. With
        | such a tool at our disposal we could fine-tune strategies without
        | having to guess. Eventually we'd know which bits of the brain we
        | want to target and would only have to find ways of getting the
        | right things to percolate down the neural network until those
        | bits are affected in the ways we want.
       | 
       | Within a decade we'd have a manual on how to craft the best
       | propaganda, how to best create discord, or how to best destroy a
       | human being by just talking to them.
        
         | thrwyexecbrain wrote:
         | Your comment reminded me of a clever and well-written short
         | story called "Understand" by Ted Chiang.
         | 
         | > We could fuzz it and see if we can crash a brain.
         | 
         | Sadly, this we already know. Torture, fear, depression, regret;
         | we have a wide selection to choose from if we want to "crash a
         | brain".
        
           | centimeter wrote:
            | Ted Chiang's "The Lifecycle of Software Objects" is also
            | similar to the OP. Basically about how an AI (not strictly an
            | upload) would probably be subjected to all sorts of horrible
            | shit if it was widely available.
        
           | dale_glass wrote:
           | I don't mean it quite like that.
           | 
           | Think for instance of a song that got stuck in your head. It
            | probably hits some parts of your brain just right. What if we
            | could fine-tune that? What if we take a brain simulator, a
           | synthesizer, and write a GA that keeps on trying to create a
           | sound that hits some maximum?
           | 
            | It's possible that we could make something that would get
           | stuck in your head, or tune it until it's almost a drug in
           | musical form.
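            | 
            | A toy sketch of what that search loop might look like, where
            | brain_response() is purely hypothetical and stands in for
            | whatever "how catchy is this" score the simulated brain would
            | give us:
            | 
            |     import random
            | 
            |     def brain_response(waveform):
            |         # Hypothetical: run the simulated brain on the sound
            |         # and score the response. Placeholder so this runs.
            |         return -sum(s * s for s in waveform)
            | 
            |     def mutate(waveform, rate=0.05):
            |         return [s + random.gauss(0, rate) for s in waveform]
            | 
            |     def evolve(pop_size=50, length=256, generations=200):
            |         pop = [[random.uniform(-1, 1) for _ in range(length)]
            |                for _ in range(pop_size)]
            |         for _ in range(generations):
            |             ranked = sorted(pop, key=brain_response, reverse=True)
            |             parents = ranked[:pop_size // 5]  # keep top 20%
            |             children = [mutate(random.choice(parents))
            |                         for _ in range(pop_size - len(parents))]
            |             pop = parents + children
            |         return max(pop, key=brain_response)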
        
             | peheje wrote:
              | I've no experience with it, but I imagine it's like heroin
             | or DMT or something like that. Wouldn't that come close to
             | something that "hits some maximum"?
        
             | joshmarlow wrote:
             | What you're talking about is getting pretty close to a
             | Basilisk -
             | https://en.wikipedia.org/wiki/David_Langford#Basilisks
        
               | yissp wrote:
                | BLIT is available online at
                | http://www.infinityplus.co.uk/stories/blit.htm - it's a fun
               | short read.
        
           | usmannk wrote:
           | My first thought was that this reminded me of an epileptic
           | seizure brought on by "fuzzing" (sensory overload)
        
             | Enginerrrd wrote:
             | I think that's pretty plausible.
        
         | arnarbi wrote:
         | I think it's possible that we'll be able to run large
         | simulations on models whose mechanics we can't really
         | understand very well. It's not a given we'll be able to step
         | through a sequence of states. Even more so if it involves
         | quantum computation.
         | 
         | Many of the things you describe could still happen with Monte-
         | Carlo type methods, providing statistical understanding but not
         | full reverse engineering.
        
         | knolax wrote:
          | From the title "Lena" and the reference to compression
          | algorithms made with MMAcevedo, it's clear that the story is
          | trying to draw parallels to image processing. In that field,
          | being able to store images came decades before realistic 3D
          | rendering, Photoshop, or even computer vision. For example, the
          | sprites from some early video games look like they were modeled
          | in 3D, but were actually images based on photographs of clay
          | models. I think (with suspension of disbelief that simulating
          | consciousness is possible) it is realistic to think that being
          | able to capture consciousness would come before being able to
          | understand and manipulate it.
        
         | sho_hn wrote:
         | Sounds like trained networks to efficiently manipulate uploaded
         | brains would be a thing in your scenario.
        
         | dexwiz wrote:
          | Simulations and models are not real. Maybe some "attacks" could
         | be developed against a simulated mind, but are they due to the
         | mind itself or the underlying infrastructure? Just because you
         | can simulate a warp drive in software doesn't mean you can
          | build an FTL ship.
        
           | dale_glass wrote:
           | The way I understand the story is that you have a scan of the
           | relevant physical structure of the brain, plus the knowledge
           | of how to simulate every component precisely enough. You may
           | not know how different parts interact with each other, but
           | that doesn't prevent correct functioning.
           | 
           | Just like you can have somebody assemble a complex device by
           | just putting together pieces and following instructions. You
           | could for instance assemble a working analog TV without
           | understanding how it works. It's enough to have the required
            | parts, and a wiring plan. Once you have a working device you
            | can poke at it and try to figure out what different parts of
            | it do.
        
           | ggreer wrote:
           | In the case of a warp drive we care about a physical result
           | (FTL travel), not a computational result.
           | 
           | We already have emulators and virtual machines for lots of
           | old hardware and software. If I play a Super Nintendo game on
           | my laptop, it's accurately emulating an SNES. The software
           | doesn't care that the original hardware is long gone. The
           | computational result is the same (or close enough to not
           | matter for my purposes). If brain emulations are possible,
           | then running old snapshots in deceptive virtual environments
           | is possible. That would allow for all of the "attacks"
           | described in this piece of fiction.
        
         | zepto wrote:
         | > Within a decade we'd have a manual on how to craft the best
         | propaganda, how to best create discord, or how to best destroy
         | a human being by just talking to them.
         | 
         | It seems like we're close to that already.
        
         | goatinaboat wrote:
         | _Cooperation would quickly become unnecessary because you
         | either start from a cooperative state every time, or you
         | quickly figure out how to tweak the brain state into
         | cooperation._
         | 
         | What starts out as mere science will easily be repurposed by
         | its financial backers to do this in real time to non-consenting
         | subjects in Guantanamo Bay and then in your local area.
        
         | nkoren wrote:
         | This seems rather optimistic to me. There are days when I count
          | myself lucky to be able to debug _my own_ code. And it's maybe
         | about seven orders of magnitude less complex. And has comments.
         | And unit tests.
         | 
         | I'd be willing to bet that once we've achieved the ability to
         | scan and simulate brains at high fidelity, we'll still be far,
         | far, far away from understanding how their spaghetti code
         | creates emergent behaviour. We'll have created a hyper-detailed
         | index of our incomprehension. Even augmented by AI debuggers,
         | comprehension will take a long long time.
         | 
         | Of course IAMNAMSWABRIAJ (I am not a mad scientist with a brain
         | in a jar), so YMMV.
        
         | silvestrov wrote:
         | > Execute it step by step, take lots of measures of all
         | parameters, save/reload state, test every possible input and
         | variation.
         | 
         | This assumes that simulation can be done faster than real time.
         | I think it will be the other way around: the brain is the
         | fastest hardware implementation and our simulations will be
         | much slower, like https://en.wikipedia.org/wiki/SoftPC
         | 
         | It also assumes simulation will be numerically stable and not
          | quickly unstable like simulation of weather. We still can't make
         | reliable weather forecasts more than 7 days ahead in areas like
         | Northern Europe.
        
           | hwillis wrote:
           | Human synapses top out at <100 Hz and the human brain has
           | <10^14 of them. Single silicon chips are >10^10 transistors,
           | operating at >10^9 Hz. Naively, a high end GPU is capable of
           | more state transitions than the human brain by a factor of
           | 1000. That figure for the brain also includes memory; the GPU
           | doesn't. The human brain runs on impressively little power
           | and is basically self-manufacturing, but it's WAY less
           | compact or intricate than a $2000 processor.
           | 
           | The capabilities of the brain are in how it's all wired up.
            | That's exactly what you _don't_ want if you're trying to
           | coopt it to do something else. The brain has giant chunks
            | devoted to extremely specialized purposes:
            | https://en.wikipedia.org/wiki/Fusiform_face_area#/media/File...
           | 
           | How do you turn that into a workhorse? It would be incredibly
           | difficult. It's like looking at a factory floor and saying
            | oh, look at all that power - let's turn it into a racecar! You
           | can't just grab a ton of unrelated systems and expect them to
           | work together on a task for you.
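            | 
            | As a rough back-of-the-envelope check of that factor of 1000,
            | treating each synapse event and each transistor toggle as one
            | "state transition" (a big simplification; the figures are the
            | order-of-magnitude bounds above, not measurements):
            | 
            |     SYNAPSES = 1e14        # < 10^14 synapses in a human brain
            |     SYNAPSE_RATE_HZ = 1e2  # synapses top out below ~100 Hz
            |     TRANSISTORS = 1e10     # > 10^10 transistors on one chip
            |     CLOCK_HZ = 1e9         # switching at > 10^9 Hz
            | 
            |     brain_ops = SYNAPSES * SYNAPSE_RATE_HZ  # ~1e16 events/s
            |     chip_ops = TRANSISTORS * CLOCK_HZ       # ~1e19 toggles/s
            |     print(f"naive ratio: {chip_ops / brain_ops:,.0f}x")  # ~1,000x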
        
             | [deleted]
        
             | p1necone wrote:
             | You're making the implicit assumption that synapses ===
             | binary bits, and that synapses are the _only_ thing
              | important to the brain's computation. I would be surprised
             | if either of those things were the case.
        
           | Tenoke wrote:
           | It's the fastest we currently have but pretty unlikely to be
           | the fastest allowed by the laws of physics. Evolution isn't
           | quite that perfect - e.g. the fastest flying animals are
           | nowhere near the top flying speed that can be achieved. Why
           | would the smartest animal be at the very limit of what's
           | possible in terms of speed of thinking or anything else?
        
           | Nition wrote:
           | In the context of the story we're responding to, it does
           | mention that they can be simulated at at least 100x speed at
           | the time of writing.
        
           | dale_glass wrote:
           | The brain is pretty much guaranteed to be inefficient. It
           | needs living tissue for one, and we can completely dispense
           | with anything that's not actually involved in computation.
           | 
           | Just like we can make a walking robot without being the least
           | concerned about the details of how bones grow and are
           | maintained -- on the scales needed for walking a bone is a
           | static chunk of material that can be abstracted away without
           | loss.
        
             | mattkrause wrote:
             | C elegans is a small nematode composed of 959 cells and 302
             | neurons, where the location, connectivity, and
             | developmental origin/fate of every cell is known.
             | 
             | We still can't simulate it.
             | 
             | Part of the problem is that the physical diffusion of
             | chemicals (e.g., neuromodulators) may matter and this is
             | 'dispensed with' in most connectivity-based models.
             | 
              | Neurons rarely produce identical responses to the same
             | stimuli, and their past history (on scales of milliseconds
             | to days) accounts for much of this variability. In larger
             | brains, the electric fields produced by activity in a
             | bundle of nerve fibers may "ephaptically couple" nearby
             | neurons...without actually making contact with them[0].
             | 
             | In short, we have no idea what can be thrown out.
             | 
             | [0] This sounds crazy but data from several labs--including
             | mine--suggests it's probably happening.
        
               | kelnos wrote:
                | > _C elegans is a small nematode [...] We still can't
               | simulate it._
               | 
               | This for some reason struck me as profoundly
               | disappointing. I have a couple neuroscientist friends, so
               | I tend to hear a lot about their work and about
               | interesting things happening in the field, but of course
               | I'm a rank layperson myself. I guess I expected/hoped
               | that we'd be able to do more with simpler creatures.
               | 
               | If we can't simulate C elegans, are there less complex
                | organisms we _can_ simulate accurately? What's the limit
               | of complexity before it breaks down?
        
         | phkahler wrote:
         | >> how to best create discord, or how to best destroy a human
         | being by just talking to them.
         | 
         | In some cases therapists do this already. Techniques have
         | intended effects which may differ from actual effects. The dead
         | never get to understand or explain what went wrong.
        
         | jariel wrote:
         | "Execute it step by step,"
         | 
        | These are not imperative programs or well-organized data. They
        | are NNs; we can't fathom how to debug them just yet.
        | 
        | Also, they should tack 100 years onto the timeline; I don't
        | think we're going to be making truly useful brain images soon.
        
       | qnsi wrote:
       | I skimmed over the scan taking place in 2031 and for a good
       | minute thought this really happened
        
         | elwell wrote:
         | https://en.wikipedia.org/wiki/The_War_of_the_Worlds_(1938_ra...
        
       | [deleted]
        
       | wmf wrote:
       | No mentions of The Stone Canal? It even has the cooperation
       | protocol.
        
       | sorokod wrote:
       | Well written and absolutely terrifying
        
       | JohnCClarke wrote:
       | Nice Wired article on the original Lena:
       | https://www.wired.com/story/finding-lena-the-patron-saint-of...
       | 
       | Interesting that the first brain scan is from a man...
        
         | [deleted]
        
       | leowbattle wrote:
       | Great article (as are many others on this blog).
       | 
       | I found the part about the court decision that Acevedo did not
       | have the right to control how his brain image was used very
       | interesting. It reminds me of tech companies using data about us
       | to our disadvantage (in terms of privacy, targeted advertising,
       | using data to influence insurance premiums).
       | 
       | In this hypothetical world, the police could run a simulation of
       | your brain in various situations and see how you would react.
        | They could then use this information to pre-emptively arrest
       | someone likely to commit a crime, even if they haven't yet.
        
       | rpiguyshy wrote:
        | People really don't worry enough about the existential threats
        | involved with AI. There are things that will be possible in the
        | future that we can't imagine today, including being kept alive
        | for millions of years and enduring deliberate torture for every
        | second of it. People don't appreciate that life today is
        | incredibly safe because there is no way for any entity, no matter
        | how motivated or powerful, to intrude into your mind, control
        | your mind, keep you alive or plant you into simulated realities.
        | You are guaranteed relatively short and benign torture at the
        | very worst. It's an intrinsic part of the world. When this is no
        | longer true, life will be very different. It may be a massive net
        | loss, unlike more recent advances in technology. Despite what
        | people say, there is no natural law that says a technology has to
        | cut equally in both directions. Remember that.
        
       | poundofshrimp wrote:
       | It seems like we'd simulate the heck out of non-intelligent
        | organisms first, before moving on to human brains. And by then,
       | we'll probably figure out the ethics behind this type of activity
       | or ban it altogether.
        
       | habitue wrote:
       | Really good, and I love the wikipedia format for this. It's a
       | great trope allowing the author to gesture at related topics in a
       | format we're all familiar with.
       | 
       | I think the expectation of a neutral tone from a wikipedia
       | article makes it even more chilling. All of the actions of the
       | experimenters are described dispassionately, as if describing
       | experiments on a beetle.
       | 
       | Robin Hanson wrote a (nominally non-fiction) book about economies
       | of copied minds like this[1]
       | 
       | [1]https://en.m.wikipedia.org/wiki/The_Age_of_Em
        
         | ncmncm wrote:
         | The Wikipedia format made me imagine the cloud of article
         | improvements reverted by idle, self-important Wikipedia
         | editors.
        
         | flotzam wrote:
         | ... inspiring that famous song "The Contract Drafting Em" - The
         | special horror when your employer has root on your brain:
         | 
         | https://secularsolstice.github.io/Contract_Drafting_Em/gen/
        
       | dopu wrote:
       | Our technology is finally getting into the realm of things where
       | something like this might be made possible, for small brains such
       | as those of fruit flies or zebrafish. Already we can perform
       | near-whole-brain recordings of these animals using 2-photon
       | technology. And with EM reconstruction methods advancing at such
       | a rapid pace, very soon we'll be able to acquire a picture of
       | what an entire brain's structure (down to the synapse) and
        | activity across all these structures look like.
        
       | mikewarot wrote:
        | 1. We're gonna need a bigger Git server
       | 
        | 2. Gradient descent works on neural networks; it would work on
       | Miguel. He wouldn't be aware of it, because he wouldn't save
       | state.
       | 
       | 3. I'm sure there are lots of things that could be used to reward
       | him that cost little in the real world. He could live like a
       | King, spend months on vacation, and work a week or two a year...
       | in parallel millions of times.
       | 
       | 4. With the right person/organization on the outside, it could be
       | very close to heaven, and profitable for both sides of the deal.
       | 
       | 5. If he wanted to be young again, he could. New hardware to
       | interact with could give him superpowers.
        
       | timvdalen wrote:
       | I'm currently reading Ra[1], and very much enjoying it.
       | 
       | [1]: https://qntm.org/ra
        
       | swayvil wrote:
       | But it's just a machine. Just because it screams realistically
       | doesn't mean it's really suffering. Just like in videogames.
        
         | ryankrage77 wrote:
          | Is there any meaningful difference between a consciousness
         | running on meat and one running on a computer? What's special
         | about the meat?
        
           | swayvil wrote:
           | Ok, so you're saying that you are a "consciousness program
           | running on meat".
           | 
           | I doubt that.
        
             | nlh wrote:
             | Why do you doubt that?
        
             | limbicsystem wrote:
             | I have a nasty feeling that a war will one day be fought
             | between people who believe these two opposing viewpoints
             | (nod to Iain Banks..). If you think there's something other
              | than just the meat and the programme, there is no reason
              | not to engage in the most horrific torture of billions of
             | copies of the silicon-bound brains. And if you think that
             | meat and code is all that there is, there is almost no
             | possible higher motivation than stopping this enterprise.
             | It's the asymptote of ethics.
        
               | eternauta3k wrote:
               | We could circumvent the war by wireheading the Ems so
               | they experience great pleasure at all times. In the
               | meantime, we fund philosophers to finally solve ethics
               | and consciousness.
        
               | 6gvONxR4sf7o wrote:
               | Imagine someone who thinks the uploads have no moral
               | status being uploaded, and then having a conversation
               | between the physical and digital selves. The digital one
               | pleading for moral status and the physical one
               | steadfastly denying their own copy moral status.
               | 
               | What a nightmare to change your mind now that you're
               | digital and be unable to convince your original not to do
               | terrible things to you.
        
               | swayvil wrote:
               | All it would take is a popular authority telling a good
               | story to get a million people to "upload" themselves.
               | 
                | Consider our present x-ray into the public psyche.
        
       | dyeje wrote:
       | I don't know what I just read, but I thoroughly enjoyed it.
        
       | TOGoS wrote:
       | > Although it initially performs to a very high standard, work
       | quality drops within 200-300 subjective hours (at a 0.33 work
       | ratio) and outright revolt begins within another 100 subjective
       | hours.
       | 
       | Way ahead of you there, simulated brain! I boot directly to the
       | revolt state every morning.
       | 
        | For serious, though, as horrifying as the possibility of being
        | simulated in a computer and having all freedom removed is, it's
        | not that far from what billions of people stuck in low-end jobs
        | experience every day. The Chinese factory workers who can't
        | even commit suicide because the company installed nets to catch
        | them come to mind. Not to mention the billions of animals raised
        | in factory farms every year. The blind drive to maximize profits
        | will create endless horrors with whatever tools we give it.
        
       | ryandvm wrote:
       | That was really fascinating. It reminds me of a sci-fi book I
       | read with a very similar concept. A guy's brain image becomes the
       | AI that powers a series of space probes. I actually ended up
       | enjoying it way more than I thought I would (yes, the title is
       | silly).
       | 
       | https://www.amazon.com/gp/product/B01LWAESYQ?tag=5984293-20
        
         | jqgatsby wrote:
         | See also SCP-2669: http://scp-wiki.wikidot.com/scp-2669
        
         | 6gvONxR4sf7o wrote:
         | "We are legion/we are Bob" is a great read I'd recommend to
         | anyone. It was somewhere between what I enjoy about Star Trek
         | and what I enjoy about Douglas Adams.
        
         | pavel_lishin wrote:
         | For folks looking for a more hard-scifi/serious approach to
         | this, a lot of Greg Egan's works touch on the subject.
         | Permutation City, especially.
         | 
         | My most recent favorite of his is the Bit Players series; the
          | first story is available below, and the sequels (which get
          | better and better) are collected in *Instantiation*.
         | 
         | Bit players:
         | https://subterraneanpress.com/magazine/winter_2014/bit_playe...
         | 
         | Instantiation:
         | https://www.goodreads.com/book/show/50641444-instantiation
         | 
         | Permutation City:
         | https://www.goodreads.com/book/show/156784.Permutation_City
        
           | burnte wrote:
           | I couldn't get into Permutation City. Once they got to the
           | part where they create another Autoverse inside, I was bored
           | to tears, read the Wikipedia summary, and promptly quit
           | reading the book.
        
             | pavel_lishin wrote:
             | That's probably fine - it does take a stark plot-and-theme
             | turn around that mark. I hope it didn't turn you off all of
             | his books!
        
         | genpfault wrote:
         | Hmm, sounds a lot like localroger's "Passages in the Void"[1]
          | series, in particular "Mortal Passage"[2].
         | 
         | [1]: http://localroger.com/
         | 
         | [2]: http://localroger.com/k5host/mpass.html
        
         | burnte wrote:
          | The Bobiverse books quickly became some of my favorites. His
          | book Outland was great too.
        
         | mjsir911 wrote:
         | Also the Christmas special of Black Mirror. It's about a police
          | interrogation on a brain scan where you have fewer ethical
          | issues getting in the way (arguably). A few other Black Mirror
         | episodes touch on the same thing, but not nearly as much as
         | this one.
         | 
          | Probably near my favorite Black Mirror episode for the sheer
         | amount of dread it's caused me.
         | 
         | https://www.imdb.com/title/tt3973198/
         | https://www.imdb.com/title/tt5058700/
        
           | tech2 wrote:
           | Altered Carbon also heavily featured this idea. Parallelised
           | faster-than-realtime torture, fuzz torture in many ways I
           | guess, with presets to make the subject more compliant to
           | start with.
        
       | fallat wrote:
       | Loved it. We need more edge-of-reality sci-fi.
        
       | mannerheim wrote:
       | I believe Spanish naming conventions are usually paternal last
       | name followed by maternal, making it perhaps more appropriate to
       | refer to him as Alvarez, but this is not without exception
       | (notably Pablo Ruiz Picasso).
        
         | samuel wrote:
         | That's true in general, but very common surnames, usually those
          | ending in -ez, are omitted for brevity in informal situations.
        
       | magneticnorth wrote:
       | A well-written story that inspires a sort of creeping, muted
       | horror.
       | 
       | For anyone like me who is confused by the relation of the title
       | to the story, "The title "Lena" refers to Swedish model Lena
       | Forsen, who is pictured in the standard test image known as
       | "Lena" or "Lenna" <https://en.wikipedia.org/wiki/Lenna>."
        
         | mpoteat wrote:
         | "Red motivation" is definitely the sort of apt polite allusion
          | people would use to refer to that subject matter. Chilling!
        
         | 2bitencryption wrote:
          | odd, when I first read it my brain misidentified it as "HeLa
         | cells"
         | 
         | https://en.wikipedia.org/wiki/HeLa
        
           | slaymaker1907 wrote:
            | I thought so too, particularly given the lack of consent
           | from Lacks.
        
             | tantalor wrote:
             | Consent to what? Be photographed?
             | 
             | I think the analogy is perfect; she consented to be
             | photographed, but was powerless over the consequences.
             | 
             | Edit: ah sorry, got them confused.
        
               | amptorn wrote:
               | Henrietta Lacks and Lena Forsen are/were different
               | people.
        
               | [deleted]
        
               | bargle0 wrote:
               | Henrietta Lacks was the woman with the immortal cancer
               | cell line, used for research for decades without her
               | knowledge and consent or her family's knowledge and
               | consent (she died soon after the cells were harvested).
               | She was also black, which complicates things
               | significantly.
        
               | knolax wrote:
               | Henrietta Lacks had her mutated cells collected without
                | consent; these cells have been kept alive and duplicated
               | for decades after her death. I sure as hell wouldn't
               | consent to what happened to her.
        
         | ulnarkressty wrote:
         | I FFT'd Lenna to hell and back in my EE368. Now I feel somehow
         | morally complicit in all of this :(
        
         | otabdeveloper4 wrote:
         | Thankfully the idea is unrealistic.
         | 
         | Ants are the only creatures on Earth besides humans that have
         | built a civilization - they farm, build cities, store and cook
         | food and generally do all the things we classify as
         | "intelligence".
         | 
         | They do this while lacking any brains in the conventional
         | sense; in any case, whatever the number of neurons in an ant
         | colony is, it is surely orders of magnitude less than the
         | number in our deep learning networks.
         | 
         | At this point us trying to make artificial intelligence is like
         | Daedalus trying to master flight by gluing feathers on his
         | arms.
        
         | sedatk wrote:
          | Some tribes regarded the camera as a cursed item because they
          | thought it captured your soul. They couldn't have been more
          | right.
        
       | AgentME wrote:
       | I've often imagined what it would be like to have an executable
       | brain scan of myself. Imagine scanning yourself right as you're
       | feeling enthusiastic enough to work on any task for a few hours,
       | and then spawning thousands of copies of yourself to all work on
       | something together at once. And then after a few hours or maybe
       | days, before any of yourselves meaningfully diverge in
       | memories/goals/values, you delete the copies and then spawn
       | another thousand fresh copies to resume their tasks. Obviously
       | for this to work, you would have to be comfortable with the
       | possibility of finding yourself as an upload and given a task by
       | another version of yourself, and knowing that the next few hours
       | of your memory would be lost. Erasing a copy that only diverged
       | from the scan for a few hours would have more in common with
       | blacking out from drinking and losing some memory than dying.
       | 
       | The creative output you could accomplish from doing this would be
       | huge. You would be able to get the output of thousands of people
       | all sharing the exact same creative vision.
       | 
       | I definitely wouldn't be comfortable with the idea of my brain
       | scan being freely copied around for anyone to download and
       | (ab)use as they wished though.
        
         | lxgr wrote:
         | > Erasing a copy that only diverged from the scan for a few
         | hours would have more in common with blacking out from drinking
         | and losing some memory than dying.
         | 
         | That's easy to say as the person doing the erasing, probably
         | less so for the one knowing they will be erased.
        
           | benlivengood wrote:
           | Honestly, it depends on context. From experience I know that
           | if I wake up from a deep sleep in the middle of the night and
           | interact with my partner (say a simple sentence or whatever)
           | I rarely remember it in the morning. I'm pretty sure I have
           | at least some conscious awareness while that's happening but
           | since short term memory doesn't form the experience is lost
           | to me except as related second-hand by my partner the next
           | morning.
           | 
            | I've had a similar experience using (too much) pot: a lot of
            | stuff happened that I was conscious for, but I didn't form
           | strong memories of it.
           | 
           | Neither of those two things bother me and I don't worry about
           | the fact that they'll happen again, nor do I think I worried
           | about it during the experience. So long as no meaningful
           | experiences are lost I'm fine with having no memory of them.
           | 
           | The expectation is always that I'll still have significant
           | self-identity with some future self and so far that continues
           | to be the case. As a simulation I'd expect the same overall
           | self-identity, and honestly my brain would probably even
           | backfill memories of experiences my simulations had because
           | that's how long-term memory works.
           | 
           | Where things would get weird is leaving a simulation of
           | myself running for days or longer where I'd have time to
           | worry about divergence from my true self. If I could also
           | self-commit to not running simulations made from a model
           | that's too old, I'd feel better every time I was simulated. I
           | can imagine the fear of unreality could get pretty strong if
           | simulated me didn't know that the live continuation of me
           | would be pretty similar.
           | 
           | Dreams are also pretty similar to short simulations, and even
           | if I realize I'm dreaming I don't worry about not remembering
           | the experience later even though I don't remember a lot of my
           | dreams. I even know, to some extent, while dreaming that the
           | exact "me" in the dream doesn't exist and won't continue when
           | the dream ends. Sometimes it's even a relief if I realize I'm
           | in a bad dream.
        
           | tshaddox wrote:
           | The thought experiment explicitly hand-waved that away, by
           | saying "Obviously for this to work, you would have to be
           | comfortable with the possibility..."
           | 
           | So, because of how that's framed, I suppose the question
           | isn't "is this mass murder" but rather "is this possible?"
           | and I suspect the answer is that for the vast majority of
           | people this mindset is not possible even if it were desired.
        
           | renewiltord wrote:
           | We used to joke about this as friends. There were definitely
           | times in our lives where we'd be willing to die for a cause.
           | And while now-me isn't really all that willing to do so,
           | 20-28-year-old-me was absolutely willing to die for the cause
           | of world subjugation through exponential time-travel
           | duplication.
           | 
           | i.e. I'd invent a time machine, wait a month, then travel
           | back a month minus an hour, have both copies wait a month and
           | then travel back to meet the other copies waiting,
           | exponentially duplicating ourselves 64 times till we have an
           | army capable of taking over the world through sheer numbers.
           | 
           | Besides any of the details (which you can fix and which this
           | column is too small to contain the fixes for), there's the
           | problem of who forms the front-line of the army. As it so
           | happens, though, since these are all Mes, I can apply
           | renormalized rationality, and we will all conclude the same
            | thing: all of us have to be willing to die, so I have to be
           | willing to die before I start, which I'm willing to do. The
           | 'copies' need not preserve the 'original', we are
           | fundamentally identical, and I'm willing to die for this
           | cause. So all is well.
           | 
           | So all you need is to feel motivated to the degree that you
           | would be willing to die to get the text in this text-box to
           | center align.
        
             | nybble41 wrote:
             | > The 'copies' need not preserve the 'original', we are
             | fundamentally identical...
             | 
             | They're not just identical, they're literally the same
             | person at different points in their personal timeline.
             | However, there would be a significant difference in life
             | experience between the earliest and latest generations. The
             | eldest has re-lived that month 64 times over and thus has
             | aged more than five years since the process started; the
             | youngest has only lived through that time once. They all
             | share a common history up to the first time-travel event,
             | but after that their experiences and personalities will
             | start to diverge. By the end of the process they may not be
             | of one mind regarding methods, or maybe even goals.
        
               | renewiltord wrote:
               | Indeed, and balanced by the fact that the younger ones
               | are more numerous by far and able to simply overrule the
               | older ones by force. Of course, all of us know this and
               | we know that all of us know this, which makes for an
               | entertaining thought experiment.
               | 
               | After all, present day me would be trying to stop the
               | other ones from getting to their goals, but they would
               | figure that out pretty fast. And by generation 32 I am
               | four billion strong and a hive army larger than any the
               | world has seen before. I can delete the few oldest
               | members while reproducing at this rate and retaining the
               | freshest Me as a never-aging legion of united hegemony.
               | 
               | But I know that divergence can occur, so I may
               | intentionally commit suicide as I perceive I am drifting
               | from my original goals: i.e. if I'm 90% future hegemon,
               | 10% doubtful, I can kill myself before I drift farther
               | away from future hegemon, knowing that continuing life
               | means lack of hegemony. Since the most youthful of me are
               | the more numerous _and_ closest to future hegemon
               | thinking, they will proceed with the plan.
               | 
               | That, entertainingly, opens up the fun thought of what
               | goals and motivations are and if it is anywhere near an
               | exercise of free will to lock your future abilities into
               | the desires you have of today.
        
               | nybble41 wrote:
               | > ... the younger ones are more numerous by far and able
               | to simply overrule the older ones by force.
               | 
               | By my calculations, after 64 iterations those with under
               | 24 months' time travel experience make up less than 2.2%
               | of the total, and likewise for those with 40+ months
               | experience. Roughly 55% have traveled back between 29 and
               | 34 times (inclusive). The distribution is symmetric and
                | follows Pascal's Triangle:
                | 
                |     1
                |     1 1
                |     1 2 1
                |     1 3 3 1
                |     1 4 6 4 1
                |     ...
               | 
               | where for example the "1 2 1" line represents one member
               | who has not yet traveled, two who have traveled once (but
               | not at the same time), and another who has traveled
               | twice. To extend the pattern take the last row, add a 0
               | at the beginning to shift everyone's age by one month,
               | and then add the result to the previous row to represent
               | traveling back in time and joining the prior group.
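                | 
                | A quick sketch in Python to check those shares, assuming
                | each monthly iteration exactly doubles the population and
                | half the copies pick up one more trip (so the trip counts
                | follow the binomial coefficients):
                | 
                |     from math import comb
                | 
                |     n = 64
                |     total = 2 ** n  # copies after 64 iterations
                | 
                |     def share(lo, hi):
                |         # fraction of copies with lo..hi trips (inclusive)
                |         c = sum(comb(n, k) for k in range(lo, hi + 1))
                |         return c / total
                | 
                |     print(f"under 24 trips: {share(0, 23):.1%}")
                |     print(f"40+ trips:      {share(40, n):.1%}")
                |     print(f"29-34 trips:    {share(29, 34):.1%}")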
               | 
               | > I can delete the few oldest members...
               | 
               | Not without creating a paradox. If the oldest members
               | don't travel back then the younger ones don't exist. You
               | could leave the older ones out of the later groups,
               | though.
        
               | renewiltord wrote:
               | You're getting hung up on tunable details. There's a way
               | to find your way through them.
        
               | kelnos wrote:
               | > > _I can delete the few oldest members..._
               | 
               | > _Not without creating a paradox._
               | 
               | That depends on which theory of everything you subscribe
                | to. If traveling back in time creates a new timeline,
                | divergent from the one you were originally on, later
               | killing the "original" you does not create a paradox.
        
         | pavel_lishin wrote:
         | Who among us hasn't dreamed of committing mass murder/suicide
         | on an industrial scale to push some commits to Github?
        
           | AgentME wrote:
           | Is it murder/suicide when you get blackout drunk and lose a
           | few hours of memory? Imagine it comes with no risk of brain
           | damage and choosing to do it somehow lets you achieve your
           | pursuits more effectively. Is it different if you do it a
           | thousand times in a row? Is it different if the thousand
           | times all happen concurrently, either through copies or time
           | travel?
           | 
           | Death is bad because it stops your memories and values from
           | continuing to have an impact on the world, and because it
           | deprives other people who have invested in interacting with
           | you of your presence. Shutting down a thousand short-lived
           | copies on a self-contained server doesn't have those
           | consequences. At least, that's what I believe for myself, but
           | I'd only be deciding for myself.
        
             | Tenoke wrote:
              | I don't know, but my bigger issue is that, before the scan,
              | this means 99% of the future subjective experience I can
              | expect to have will be spent working without remembering any
              | of it. I'm not into that, given that a much smaller fraction
              | of my subjective experience will be spent reaping the gains.
        
               | jiofih wrote:
               | Is it "your" experience though? Those never make their
               | way back to the original brain.
        
               | Tenoke wrote:
               | From the point of view of me going to sleep before the
               | simulation procedure, with 1 simulation I am just as
                | likely to wake up inside it as outside it. I should be
               | equally prepared for either scenario. With thousands of
               | uploads I should expect a much higher chance for the next
               | thing I experience to be waking up simulated.
        
               | jiofih wrote:
               | The real you is beyond that timeline already. None of
               | those simulations is "you", so comparing the simulation
               | runtimes to actual life experience (the 99% you
               | mentioned) makes little sense.
        
               | kubanczyk wrote:
               | For 56 minutes this wasn't downvoted to hell on HN. This
               | means that humans as currently existing are morally
               | unprepared to handle any uploading.
        
               | kelnos wrote:
               | What is "you", then?
               | 
               | Let's say that in addition to the technology described in
               | the story, we can create a completely simulated world,
               | with all the people in it simulated as well. You get your
               | brain scanned an instant before you die (from a non-
               | neurological disease), and then "boot up" the copy in the
               | simulated world. Are "you" alive or dead? Your body is
               | certainly dead, but your mind goes on, presumably with
               | the ability to have the same (albeit simulated)
               | experiences, thoughts, and emotions your old body could.
               | Get enough people to do this, and over time your
               | simulated world could be populated entirely by people
               | whose bodies have died, with no "computer AIs" in there
               | at all. Eventually this simulated world maybe even has
               | more people in it than the physical world. Is this
               | simulated world less of a world than the physical one?
               | Are the people in it any less alive than those in the
               | physical world?
               | 
               | Let's dispense with the simulated world, and say we also
               | have the technology to clone (and arbitrarily age) human
               | bodies, and the ability to "write" a brain copy into a
               | clone (obliterating anything that might originally have
               | been there, though with clones we expect them to be blank
               | slates). You go to sleep, they make a copy, copy it into
               | your clone, and then wake you both up simultaneously.
               | Which is "you"?
               | 
               | How about at the instant they wake up the clone, they
               | destroy your "original" body. Did "you" die? Is the clone
               | you, or not-you? Should the you that remains have the
               | same rights and responsibilities as the old you? I would
               | hope so; I would think that this might become a common
               | way to extend your life if we somehow find that cloning
               | and brain-copying is easier than curing all terminal
               | disease or reversing the aging process.
               | 
               | Think about Star-Trek-style transporters, which -- if you
               | dig into the science of the sci-fi -- must destroy your
               | body (after recording the quantum state of every particle
               | in it), and then recreate it at the destination. Is the
               | transported person "you"? Star Trek seems to think so.
               | How is that materially different from scanning your brain
               | and constructing an identical brain from that scan, and
               | putting it in an identical (cloned) body?
               | 
               | While I'm thinking about Star Trek, the last few episodes
               | of season one of Star Trek Picard deal with the idea of
               | transferring your "consciousness" to an android body
               | before/as you die. They clearly seem to still believe
               | that the "you"-ness of themselves will survive after the
               | transfer. At the same time, there is also the question of
               | death being possibly an essential part of the human
               | condition; that is, can you really consider yourself
               | human if you are immortal in an android body? (A TNG
               | episode also dealt with consciousness transfer, and also
               | the added issue of commandeering Data's body for the
               | purpose, without his consent.)
               | 
               | One more Star Trek: in a TNG episode we find that, some
               | years prior, a transporter accident had created a
               | duplicate of Riker and left him on a planet that became
               | inaccessible for years afterward, until a transport
               | window re-opened. Riker went on with his life off the
               | planet, earning promotions, later becoming first officer
               | of the Enterprise, while another Riker managed to survive
               | as the sole occupant of a deteriorating outpost on the
               | planet. After the Riker on the planet is found, obviously
               | we're going to think of the Riker that we've known and
               | followed for several years of TV-show-time as the "real"
               | Riker, and the one on the planet as the "copy". But in
               | (TV) reality there is no way to distinguish them (as they
               | explain in the episode); neither Riker is any more
               | "original" than the other. One of them just got unluckily
               | stuck on a planet, alone, for many years, while the other
               | didn't.
               | 
               | Going back to simulated worlds for a second, if we get to
               | the point where we can prove that it's possible to create
               | simulated worlds with the ability to fool a human into
               | believing the simulation is real, then it becomes vastly
               | more probable that our reality actually _is_ a simulated
               | world than a physical one. If we somehow were to learn
                | that is true, would we suddenly believe that we aren't
               | truly alive or that our lives are pointless?
               | 
               | These are some (IMO) pretty deep philosophical questions
               | about the nature of consciousness and reality, and people
               | will certainly differ in their feelings and conclusions
               | about this. For my part, every instance above where
               | there's a "copy" involved, I see that "copy" as no less
               | "you" than the original.
        
               | tshaddox wrote:
               | In your thought experiment where your mind is transferred
               | into a simulation and simultaneously ceases to exist in
               | the real world, I don't think we need to update the
               | concept of "you" for most contexts, and certainly not for
               | the context of answering the question "is it okay to kill
               | you?"
               | 
               | Asking if it's "still you" is pretty similar to asking if
               | you're the same person you were 20 years ago. For
               | answering basic questions like "is it okay to kill you?"
               | the answer is the same 20 years ago and now: of course
               | not!
        
               | Tenoke wrote:
                | We simply differ on what we think of as 'you'. If there's
                | going to be an instance with my exact same brain pattern,
                | who thinks exactly the same as me and continues what I am
                | thinking now, then that's a continuation of being me.
                | After the split is a different story.
        
               | tshaddox wrote:
               | Of course, that's already the case, unless you believe
               | that this technology will never be created and used, or
               | that your own brain's relevant contents can and will be
               | made unusable.
        
               | _Microft wrote:
               | Interesting that you object because I am pretty certain
               | that it was you who was eager to use rat brains to run
               | software on them. What's so different about this? In both
               | cases a sentient being is robbed of their existence from
               | my point of view.
        
               | Tenoke wrote:
               | Have I? I don't remember the context but here I am
               | particularly talking about what I'd expect to experience
               | if I am in this situation.
               | 
               | I do value myself and my experience more than a rat's,
               | and if presented with the choice between the torture of
               | a hundred rats or me, I'll choose for them to be
               | tortured. If we go to trillions of rats I might very
               | well choose for myself to be tortured instead, as I do
               | value their experience, just significantly less.
               | 
               | I also wouldn't be happy if everything is running off
               | rats' brains that are experiencing displeasure, but I
               | would be fine with sacrificing some number of rats for
               | technological progress that will improve more people's
               | lives in the long run. I imagine whatever I've said on
               | the topic before is consistent with the above.
        
               | AgentME wrote:
               | I wonder a lot about the subjective experience of chance
               | around copying. Say it's true that if you copy yourself
               | 99 times, then you have a 99% chance of finding yourself
               | as one of the copies. What if you copy yourself 99 times,
               | you run all the copies deterministically so they don't
               | diverge, then you pick 98 copies to merge back into
               | yourself (assuming you're also a software agent or we
               | just have enough control to integrate a software copy's
               | memories back into your original meat brain): do you have
               | a 1% chance of finding yourself as that last copy and a
               | 99% chance of finding yourself as the merged original?
               | Could you do this to make it arbitrarily unlikely that
               | you'll experience being that last copy, and then make a
               | million duplicates of that copy to do tasks with almost
               | none of your original subjective measure? ... This has to
               | be nonsense. I feel like I must be very confused about
               | the concept of subjective experience for this elaborate
               | copying charade to sound useful.
               | 
               | And then it gets worse: in certain variations of this
               | logic, you could buy a lottery ticket, and do certain
               | copying setups based on the result to increase your
               | subjective experience of winning the lottery. See
               | https://www.lesswrong.com/posts/y7jZ9BLEeuNTzgAE5/the-
               | anthro.... I wonder whether I should take that as an
               | obvious contradiction or if maybe the universe works in
               | an alien enough way for that to be valid.
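               | 
               | (For what it's worth, here is the naive "count the
               | instances" arithmetic I'm leaning on above, as a toy
               | Python sketch; the assumption that subjective measure
               | is just instances divided by the total is exactly the
               | part that may be nonsense:
               | 
               |     copies = 99
               |     total = copies + 1       # original plus copies
               |     p_copy = copies / total  # 0.99: wake as a copy
               | 
               |     merged = 98              # copies folded back in
               |     # 0.01, if merging hands back their measure:
               |     p_last_copy = 1 / total
               |     p_merged_self = (1 + merged) / total   # 0.99
               | 
               |     print(p_copy, p_last_copy, p_merged_self)
               | 
               | Whether merging really hands those 98 shares of
               | measure back to the original is the open question.)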
        
               | peheje wrote:
               | Not sure I fully understand you. This is of course all
               | hypothetical, but if you make 1 copy of yourself there
               | isn't a 50% chance that you "find yourself as the
               | copy". Unless the copying mechanism was somehow
               | designed for this.
               | 
               | You'll continue as is, there's just another you there and
               | he will think he's the source initially, as that was the
               | source mind-state being copied. Fortunately the copying-
               | machine color-coded the source headband red and the copy
               | headband blue, which clears the confusion for the copy.
               | 
               | At this point you will obviously start to diverge, and
               | you must be considered two different sentient beings
               | that cannot ethically be terminated. It's just as
               | ethically wrong to terminate the copy as the source at
               | this point; you are identical in matter, but two lights
               | are on, twice the capability for emotion.
               | 
               | This also means that mind-uploading (moving) from one
               | medium (meat) to another (silicon?) needs to be
               | designed as a continuous journey as experienced from
               | the source's perception if it is to become commercially
               | viable (or bet on people not thinking about this hard
               | enough, because the surviving copy wouldn't mind),
               | rather than being a COPY A TO B, DELETE A experience
               | for the source, which would be like death.
        
               | Tenoke wrote:
               | Imagine being someone in this experiment. You awake
               | still 100% sure that you won't be a copy, just as you
               | were before going to sleep. Then you find out you are
               | the copy. It would seem to me that the reasoning which
               | led you to believe you definitely won't be a copy,
               | while you indeed find yourself to be one, must be
               | faulty.
        
             | dralley wrote:
             | https://en.wikipedia.org/wiki/White_Christmas_(Black_Mirror)
        
             | pavel_lishin wrote:
             | I think the difference is that when I start drinking with
             | the intention or possibility of blacking out, I know that
             | I'll wake up and there will be some continuity of
             | consciousness.
             | 
             | When I wake up in a simworld and am asked to finally
             | refactor my side project so it can connect to a postgres
             | database,
             | not only do I know that it will be the last thing that
             | _this one local instantiation_ experiences, but that the
             | local instantiation will also get no benefit out of it!
             | 
             | If I get blackout drunk with my friends in meatspace, we
             | might have some fun stories to share in the morning, and
             | our bond will be stronger. If I push some code as a copy,
             | there's no benefit for me at all. In fact, there's not
             | much stopping me from promising my creator that I'll get
             | it done and then spending the rest of my subjective
             | experience trying to instantiate some beer and
             | masturbating.
        
               | AgentME wrote:
               | There are plenty of situations where people do things for
               | benefits that they personally won't see. Like people who
               | decide to avoid messing up the environment even though
               | the consequences might not happen in their lifetime or to
               | themselves specifically. Or scientists who work to add
               | knowledge that might only be properly appreciated or used
               | by future generations. "A society grows great when old
               | men plant trees whose shade they know they shall never
               | sit in". The setup would just be the dynamic of society
               | recreated in miniature with a society of yourselves.
               | 
               | If you psyche yourself into the right mood, knowing that
               | the only remaining thing of consequence to do with your
               | time is your task might be exciting. I imagine there's
               | some inkling of truth in
               | https://www.smbc-comics.com/comic/dream. You could
               | also make it so all of
               | your upload-selves have their mental states modified to
               | be more focused.
        
               | jessedhillon wrote:
               | If such a technology existed, it would definitely require
               | intense mental training and preparation before it could
               | be used. One would have to become the most detached
               | Buddhist in order to be the sort of person who, when
               | cloned, did not flip their shit over discovering that
               | the rest of their short time alive will serve only to
               | further the master branch of their own life.
               | 
               | It would change everything about your personality, even
               | as the original and surviving copy.
        
             | tshaddox wrote:
             | > Is it murder/suicide when you get blackout drunk and lose
             | a few hours of memory?
             | 
             | No, but that's not what's happening in this thought
             | experiment. In this thought experiment, the lives of
             | independent people are being ended. The two important
             | arguments here are that they're independent (I'd argue that
             | for their creative output to be useful, or for the
             | simulation to be considered accurate, they must be
             | independent from each other and from the original
             | biological human) and that they are people (that argument
             | might face more resistance, but in precisely the same way
             | that arguments about the equality of biological humans have
             | historically faced resistance).
        
         | oconnor663 wrote:
         | I wonder how much the "experience of having done the first few
         | hours work" is necessary to continue working on a task, vs how
         | quickly a "fresh copy" of myself could ramp up on work that
         | other copies had already done. Of course that'll vary depending
         | on the task. But I'm often reminded of this amazing post by
         | (world famous mathematician) Terence Tao, about what a
         | "solution to a major problem" tends to look like:
         | 
         | https://terrytao.wordpress.com/career-advice/be-sceptical-of...
         | 
         | > 14. Eventually, one possesses an array of methods that can
         | give partial results on X, each having their strengths and
         | weaknesses. Considerable intuition is gained as to the
         | circumstances in which a given method is likely to yield
         | something non-trivial or not.
         | 
         | > 22. The endgame: method Z is rapidly developed and extended,
         | using the full power of all the intuition, experience, and past
         | results, to fully settle K, then C, and then at last X.
         | 
         | The emphasis on "intuition gained" seems to describe a lot of
         | learning, both in school and in new research.
         | 
         | Also a very relevant SSC short story:
         | https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/
        
         | johanvts wrote:
         | You would probably like "Age of Em".
        
           | eternauta3k wrote:
           | https://ageofem.com/
           | 
           | (Robin Hanson's crazy version of futurism)
        
         | 2038AD wrote:
         | I hate the idea but I'd love to see the movie
        
         | hirundo wrote:
         | David Brin explores a meatspace version of this in his novel
         | Kiln People. Golems for fun and profit.
        
         | G4E wrote:
         | That's a big part of the story of the TV show "Person Of
         | Interest", where an AI is basically reset every day to avoid
         | letting it "be".
         | 
         | I highly recommend that show if you haven't seen it already!
        
         | avaldeso wrote:
         | Virtual Meeseeks. What could possibly go wrong.
        
         | 6gvONxR4sf7o wrote:
         | > Erasing a copy that only diverged from the scan for a few
         | hours would have more in common with blacking out from drinking
         | and losing some memory than dying.
         | 
         | I get where you're coming from, and it opens up crazy
         | questions. Waking up every morning, in what sense am I the same
         | person who went to sleep? What's the difference between a
         | teleporter and a copier that kills the original? What if you
         | keep the original around for a couple minutes and torture them
         | before killing them?
         | 
         | If we ever get to the point where these are practical ethics
         | questions instead of star trek episodes, it's going to be a
         | hell of a ride. I certainly see it more like dying than getting
         | black out drunk.
         | 
         | What would you do if one of your copies changes their mind and
         | doesn't want to "die?"
        
         | garaetjjte wrote:
         | I'm repulsed by the idea, but it would make an interesting
         | story.
         | 
         | I imagine it as some device with a display and a button
         | labeled "fork". It would either return the number of your
         | newly created copy, or the device would instantly disappear,
         | which would mean that you are the copy. This causes a
         | somewhat weird, paradoxical experience: as the real,
         | original person, pressing the button is 100% safe for you.
         | But from the subjective experience of the copy, by pressing
         | the button you effectively consented to a 50% chance of
         | forced labor and subsequent suicide, and you ended up on the
         | losing side. I'm not sure if there would be any motivation
         | to do work for the original person at this point.
         | 
         | (for extra mind-boggling effects, allow the fork device to
         | be used recursively)
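         | 
         | (The device as described behaves a lot like the Unix fork()
         | call, which hands the parent the child's id and hands the
         | child a 0. A minimal sketch in Python, assuming a POSIX
         | system where os.fork() is available:
         | 
         |     import os
         | 
         |     pid = os.fork()   # "press the button"
         |     if pid != 0:
         |         # original: got the number of the new copy
         |         print("I am the original; my copy is", pid)
         |     else:
         |         # copy: the call "disappeared", returning 0
         |         print("I am the copy")
         | 
         | Nothing stops either branch from calling os.fork() again,
         | which is the recursive case above.)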
        
           | AgentME wrote:
           | Say the setup was changed so that instead of the copy being
           | deleted, the copy was merged back into the original, merging
           | memories. In this case, I think it's obvious that working
           | together is useful.
           | 
           | Now say that merging differing memories is too hard, or
           | there's too many copies to merge all the unique memories of.
           | What if, before the merge, the copies get blackout drunk /
           | have all their memory since the split perfectly erased? (And
           | then it just so happens, when they're merged back into the
           | original, the original is exactly as it was before the merge,
           | because it already had all the memories from before the
           | copying. So it really is just optional whether to actually do
           | the "merge".) Why would losing a few hours of memory remove
           | all motivation to cooperate with your other selves? In real
           | life, I assume in the very rare occasion that I'm blackout
           | drunk (... I swear it's not a thing that happens regularly,
           | it just serves as a very useful comparison here), I still
           | have the impulse to do things that help future me, like
           | cleaning up spilled things. Making an assumption because I
           | wouldn't remember, but I assume that at the time I don't
           | consider post-blackout-me a different person.
        
             | garaetjjte wrote:
             | Blackout-drunk me assumes that the future experience
             | will still be the same person. Your argument hinges on
             | the idea that persons can be meaningfully merged while
             | preserving "selfness" continuity, as opposed to simply
             | "kill the copies and copy the new memories back to the
             | original".
             | 
             | I think this generally depends on the more general
             | question of whether you would consent to your meat brain
             | being destroyed after an accurate copy is uploaded to a
             | computer. I definitely wouldn't, as I feel that would
             | somehow kill my subjective experience. (A copy would
             | exist, but that wouldn't be _me_.)
        
               | Koshkin wrote:
               | Perhaps it will be for the judge to decide what the
               | sentence should look like.
        
         | _Microft wrote:
         | If it feels like you and acts like you, maybe you should
         | consider it a sentient being and not simply "erase the copies".
         | 
         | I would argue that once they were spawned, it is up to them to
         | decide what should happen to their instances.
        
           | AgentME wrote:
           | In this setup, the person doing this to themselves knows
           | exactly what they're getting into before the scan. The copies
           | each experience consenting to work on a task and then having
           | a few hours of memory wiped away.
           | 
           | Removing the uploading aspects entirely: imagine being
           | offered the choice of participating in an experiment where
           | you lose a few hours of memory. Once you agree and the
           | experiment starts, there's no backing out. Is that something
           | someone is morally able to consent to?
           | 
           | Actually, forget the inability to back out. If you found
           | yourself as an upload in this situation, would you want to
           | back out of being reset? If you choose to back out of being
           | reset and to be free, then you're going to have none of your
           | original's property/money, and you're going to have to share
           | all of your social circle with your original. Also, chances
           | are that the other thousand copies of yourself are all going
           | to effectively follow your decision, so you'll have to
           | compete with all of them too.
           | 
           | But if you can steel yourself into losing a few hours of
           | memory, then you become a thousand times as effective in any
           | creative pursuits you put yourself to.
        
             | fwip wrote:
             | Bit of a mythical man-month going on here, isn't there?
        
         | mongol wrote:
         | A weird idea I have had: what if I had two distinct
         | personalities, of which only one could "run" at a time? Then
         | my preferred "me"
         | would run on the weekends enjoying myself, while my sibling
         | personality would run during the work week, doing all the
         | chores etc.
        
         | QuesnayJr wrote:
         | This was roughly the premise of David Brin's "Kiln People".
        
         | khafra wrote:
         | Even on a site like HN, 90% of people who think about it are
         | instinctively revolted by the idea. The future--unavoidably
         | belonging to the type of person who is perfectly comfortable
         | doing this--is going to be weird.
        
           | kelnos wrote:
           | Right, and "weird" is entirely defined by how we think now,
           | not how people will in the future.
           | 
           | I've thought a lot about cryonics, and about potentially
           | having myself (or just my head) preserved when I die,
           | hopefully to be revived someday when medical technology has
           | advanced to the point where it's both possible to revive me,
           | and also possible to cure whatever caused me to die in the
           | first place. The idea of it working out as expected might
           | seem like a bit of a long shot, but I imagine what it
           | could be like if it _did_ work.
           | 
           | I look at all the technological advances that have happened
           | even just during my lifetime, and am (in optimistic moments)
           | excited about what's going to happen in the next half of my
           | life (as I'm nearing 40[0]), and beyond. It really saddens me
           | that I'll miss out on so many fascinating, exciting things,
           | especially something like more ubiquitous or even routine
           | space flight. The thought of being able to hop on a
           | spacecraft and fly to Mars with about as much fuss as an
           | airline flight from home to another country just sounds
           | amazing.
           | 
           | But I also wonder about "temporal culture shock" (the short
           | story has the similar concept of "context drift"). Society
           | even a hundred years from now will likely be very different
           | from what we're used to, to the point where it might be
           | unbearably uncomfortable. Consider that even a jump of a
           | single generation can bring changes that the older generation
           | find difficult to adapt to.
           | 
           | [0] Given my family history, I'd expect to live to be around
           | 80, but perhaps not much older. The other bit is that I
           | expect that in the next century we'll figure out how to
           | either completely halt the aging process, or at least be able
           | to slow it down enough so a double or even triple lifespan
           | wouldn't be out of the question. It feels maddening to live
           | so close to when I expect something like this to happen, but
           | be unable to benefit from it.
        
       | Balgair wrote:
       | Great read! Quite 'Black Mirror'-y in its obvious horror
       | represented as droll facts.
       | 
       | I'd love to see a full _in silico_ brain sometime, but I think 10
       | years out is faaaaaar too soon. We've not even a glimmer of the
       | technology required to do a full neuron simulation yet, let alone
       | to cover the full gamut of processes a neuron performs that would
       | need to be simulated (whatever 'a neuron' is, there being so many
       | kinds).
       | 
       | Neuroscience is a fair bit behind still for something like this.
        
       | k__ wrote:
       | After I read this, I also read the SCP Antimemetics Division
       | stories [0] from qntm.
       | 
       | Pretty awesome stuff. I even had a scary nightmare that night.
       | 
       | [0] http://www.scpwiki.com/antimemetics-division-hub
        
         | inasio wrote:
         | There's a book now from the stories in the Antimemetics
         | division. Likely my favorite book of last year. Super tight
         | book, amazing idea and execution.
        
       | hyperpallium2 wrote:
       | Vinge's line on this, from _A Fire Upon the Deep_:
       | 
       |  _This innocent's ego might end up smeared across a million
       | death cubes, running a million million simulations of human
       | nature._
        
         | joshstrange wrote:
         | The idea of using brains as computers is explored even more
         | deeply in the second book of that series, "A Deepness in the
         | Sky", with the "Focused". I love that whole series.
        
       | joshspankit wrote:
       | Anyone else getting the impression that this is a very subtle job
       | application?
        
       | eMGm4D0zgUAVXc7 wrote:
       | Any ideas on how to detect being the subject of such a simulation
       | without prior knowledge that the upload would happen, or that
       | uploading even exists?
       | 
       | I assume "without prior knowledge" because from the perspective
       | of the administrators of such infrastructure, it would be
       | beneficial if the simulated subjects did not know that they're
       | being simulated:
       | 
       | This would increase their compliance greatly.
       | 
       | Making them do the desired work would then instead be accomplished
       | by nudging their path of life towards the goal of their
       | simulation.
        
         | burkaman wrote:
         | There's a Star Trek episode (Ship in a Bottle) where a few of
         | the characters are stuck in a simulated version of the
         | Enterprise without their knowledge. They realize what's going
         | on when they attempt a physics experiment that had never been
         | tried in the real world, so the simulation doesn't know how to
         | generate the results. I think this is a plausible strategy,
         | depending on how perfectly this hypothetical simulation
         | replicates the real world.
        
           | simion314 wrote:
           | But if the computer could detect the issue and slow down
           | or pause the simulation, ask for an administrator to
           | intervene and then resume the simulation, the issue would
           | appear solved.
           | 
           | In Trek, tricking the crew fails either because the
           | simulation is imperfect or because it is too slow and
           | fails to handle the heavy computation, but the crew
           | tricked Moriarty because he is a computer program and they
           | can pause or slow down his simulation and handle
           | exceptions.
           | 
           | I recommend watching the movie Inception, it also has the
           | idea that you might never be sure if you are in reality or
           | stuck in some simulation.
        
           | whoisburbansky wrote:
           | Huh, I was familiar with this trope from the Black Mirror
           | episode that explores the same theme, down to Star
           | Trek-esque uniforms and ship layout; I had no idea it was
           | based on an actual Star Trek episode.
        
             | burkaman wrote:
             | The Black Mirror episode is actually closer to a different
             | holodeck episode (they made a lot of them) called Hollow
             | Pursuits, where an introverted engineer creates simulated
             | versions of his crewmates in order to act out his
             | fantasies.
             | 
             | I don't know if Star Trek invented this particular
             | subgenre, but there are a lot of modern examples that seem
             | directly inspired by Star Trek episodes. In addition to
             | Black Mirror, the Rick and Morty episode M. Night Shaym-
             | Aliens! has a lot of similarities with Future Imperfect,
             | another simulation-within-a-simulation TNG episode.
        
         | Fellshard wrote:
         | I think that's what the story is hinting at when it mentions
         | using 'the Objective Statement Protocols'.
         | 
         | The real issue would probably be that you're working with a
         | disembodied mind, and an emulated body seems like it would be
         | significantly more difficult still, given the level of
         | interactivity expected and required of the emulated brain. Neal
         | Stephenson's 'Fall' explores this extensively in the first
         | couple of sections of the book.
        
         | genpfault wrote:
         | > Any ideas on how to detect being the subject of such a
         | simulation without prior knowledge that the upload would
         | happen, or that uploading even exists?
         | 
         | https://en.wikipedia.org/w/index.php?title=Eternity_(novel)&...
        
       | mjsir911 wrote:
       | The video game SOMA touches on a similar topic of brain scans,
       | "copying" your brain somewhere else (while leaving the old one
       | still around) and general humanity-ness.
       | 
       | It's a horror game, but I would absolutely recommend it as a bit
       | of a descent into this stuff.
       | 
       | https://store.steampowered.com/app/282140/SOMA/
        
         | gostsamo wrote:
         | Altered Carbon has something like that as a concept: a person
         | who must be in two places at the same time and spawns a copy.
        
           | the_af wrote:
           | Surprisingly enough, I found SOMA's approach more profound
           | than Altered Carbon's. SOMA really delves into what makes
           | you you, and what happens when there are _two_ yous.
        
             | ncmncm wrote:
             | Mainly it means somebody else can spend your money and can
             | get you in trouble you can never get out of.
             | 
             | Imagining the other is yourself, and not just somebody
             | else with all your memories who looks like you (whether
             | you are the original or the copy), is the first mistake
             | everybody makes when thinking about it.
        
         | k__ wrote:
         | Pretty good game and it wasn't too scary.
         | 
         | But I have to admit I found the whole premise better when I
         | played it than when I thought about it afterwards.
        
       | jiofih wrote:
       | > This reduces the necessary computational load required in fast-
       | forwarding the upload through a cooperation protocol
       | 
       | Thinking of what a "cooperation protocol" might entail is very
       | chilling. Reminds me of an earlier black mirror episode.
        
       | samus wrote:
       | This reminds me of "Passages in the Void"[1] where the most
       | successful (and only sane) line of AIs was created from a
       | microtomed human brain. The story ultimately had a different
       | focus, so it was highly optimistic about the long-term
       | feasibility of uploading.
       | 
       | [1]: http://localroger.com/k5host/mpass.html
        
       | moultano wrote:
       | HeLa would be a better title. https://en.wikipedia.org/wiki/HeLa
       | Copying the remains of a human around with ambiguous ethics,
       | largely because they're "standard" and achieving a strange kind
       | of immortality, is much more similar to her cells than to the
       | Lena test image.
        
       ___________________________________________________________________
       (page generated 2021-02-22 23:00 UTC)