[HN Gopher] What is intelligent life? Portia Spiders and GPT
___________________________________________________________________
What is intelligent life? Portia Spiders and GPT
Author : FrustratedMonky
Score : 37 points
Date : 2024-06-17 19:05 UTC (3 hours ago)
(HTM) web link (aeon.co)
(TXT) w3m dump (aeon.co)
| iandanforth wrote:
| If you enjoy this topic, I also highly recommend "A Brief
| History of Intelligence", which goes into quite a bit of detail,
| is very readable, and ties in directly to the near-term future
| of what intelligence will mean in our world. Really a very good
| book!
| FrustratedMonky wrote:
| Ever since reading Children of Time, I have never thought of
| Portia spiders the same way again. I read it back to back with
| Blindsight, and it really shifted my view of consciousness.
|
| It even changed how I think about our current society's concepts
| of male/female. In Children of Time the Portia spiders are
| female-dominant and eat the males. Scale that up to an
| intelligent society, and it was a pretty interesting perspective
| change to see female characters debating whether they should go
| hunt some males for dinner, and later discussing 'rights': "of
| course males can't be equal, can't they just be happy if we let
| them live..."
| joshstrange wrote:
| I hate spiders (yes, I know the value they provide; I just can't
| get over how much they creep me out), but I loved the Children
| of Time series, and I also just re-read A Deepness in the Sky
| (Vernor Vinge), which is another favorite of mine. I'm not sure
| I could handle a movie of either, but the books are great since
| I can forget they are "spiders".
| joshmarlow wrote:
| Well, I loved A Deepness in the Sky and Blindsight, so now I'm
| adding Children of Time to my ever-growing list.
| romaintailhurat wrote:
| A very nice book indeed, and I also recommend the follow-up
| volumes.
| ben_w wrote:
| I suspect that A Deepness in the Sky would work best as
| animation, to preserve the plot point that has the spiders
| described with familiar human nouns, and to only reveal the
| _graphical_ truth at the same moment in the story we find out
| the _literary_ truth.
|
| (I vaguely remember something like this happening with a
| monster's POV section in Schlock Mercenary, but that's a long
| web comic and I can't remember where in its history to look
| for it.)
| AnimalMuppet wrote:
| The arc starts here:
| https://www.schlockmercenary.com/2001-10-01
|
| It switches POV here:
| https://www.schlockmercenary.com/2001-10-22
| ben_w wrote:
| Yup, that's the one I had in mind.
| phaedrus wrote:
| "Instead of a measurable, quantifiable thing that exists
| independently out in the world, we suggest that intelligence is a
| label." Thought experiments lead me to the conclusion that the
| same is true of consciousness. I make the analogy to how the
| essay, What Color Are Your Bits? which argues that copyright is
| "out of band" (my summary) from the actual bits of data. I think
| the property of a system having consciousness is, like copyright
| of data, neither intrinsic nor an epiphenomenon but rather at
| least partially a status of how we regard it.
|
| The thought experiment goes thus:
|
| Assume it's possible to simulate a conscious brain with a
| deterministic program. (Already a big ask for some, but this is
| one of the axioms of my argument, so bear with me.) Assume that,
| if embodied, the simulation would be indistinguishable from the
| original person, and that, if not embodied, it could be
| interacted with in a simulated environment.
|
| If it is deterministic, that means that if you re-ran the
| simulation with the same data, inputs, and timing, everything
| would proceed identically. Is the simulation conscious again
| while you replay it along that fixed path?
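|
| (A concrete way to picture this, as a minimal Python sketch; the
| "brain" here is just a toy deterministic state-transition
| function, and all the names are illustrative, not a real
| simulator:)
|
|     import hashlib
|
|     def step(state: bytes, stimulus: bytes) -> bytes:
|         # Deterministic transition: the same state + stimulus
|         # always yields the same next state.
|         return hashlib.sha256(state + stimulus).digest()
|
|     def run(seed: bytes, inputs: list[bytes]) -> list[bytes]:
|         state, trace = seed, []
|         for stimulus in inputs:
|             state = step(state, stimulus)
|             trace.append(state)
|         return trace
|
|     first = run(b"genesis", [b"light", b"sound", b"touch"])
|     replay = run(b"genesis", [b"light", b"sound", b"touch"])
|     assert first == replay  # bit-identical: conscious again?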
|
| Suppose you memoized portions of the computation. How much of
| the brain could you memoize before you would no longer consider
| it conscious?
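|
| (Sketching that in the same toy terms: one "region" of the brain
| gets replaced by a lookup table after the first run. The region
| name is, of course, made up:)
|
|     import hashlib
|     from functools import lru_cache
|
|     @lru_cache(maxsize=None)
|     def visual_region(stimulus: bytes) -> bytes:
|         # The first call with a given stimulus computes; every
|         # later identical call is a pure table lookup, with no
|         # "processing" happening at all.
|         return hashlib.sha256(b"V1" + stimulus).digest()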
|
| Suppose that, instead of memoizing "in breadth", you used
| memoization to skip steps of the simulated consciousness without
| changing the outcome. How many in-between states could you gloss
| over while still considering the simulation conscious?
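|
| (Again as a toy sketch: cache the net effect of k steps, so a
| replay jumps from state t straight to state t+k and the
| in-between states are never materialized:)
|
|     import hashlib
|
|     def step(state: bytes) -> bytes:
|         return hashlib.sha256(state).digest()
|
|     skip_table: dict[bytes, bytes] = {}
|
|     def advance(state: bytes, k: int) -> bytes:
|         key = state + k.to_bytes(4, "big")
|         if key not in skip_table:   # first run: do the real steps
|             s = state
|             for _ in range(k):
|                 s = step(s)
|             skip_table[key] = s
|         return skip_table[key]      # replay: one lookup, no steps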
|
| Suppose you divided the simulation among multiple computers,
| without changing the outcome. If you accept the original
| premise, you may have little trouble accepting that a
| distributed system with a fast network is also capable of
| hosting a conscious mind. However, since as we said this replay
| is deterministic, there's nothing stopping the nodes from
| substituting internal playback of pre-recorded network packets
| for actual communication on the network.
|
| Are you prepared to call it a simulation of consciousness when
| the nodes of this "distributed system" each internally replay
| their own portion of it while remaining silent, _not_ sending
| any network packets between them?
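|
| (In sketch form: each node just plays back a recording of the
| packets it received last time, so the "distributed" replay
| involves no communication at all. Illustrative names again:)
|
|     class ReplayNode:
|         def __init__(self, recorded_packets: list[bytes]):
|             self.recorded = iter(recorded_packets)
|
|         def recv_from_network(self) -> bytes:
|             # Substitute the recording for the network: nothing
|             # is actually sent, yet the node computes exactly
|             # what it computed in the original run.
|             return next(self.recorded)
|
|     node = ReplayNode([b"state-update-1", b"state-update-2"])
|     assert node.recv_from_network() == b"state-update-1"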
|
| My point with all of these variations is not to say which is or
| isn't conscious; it's to argue that there is no clear dividing
| line. One could come up with moral arguments about which of
| these simulations it would be unethical to keep trapped in this
| experimental setup, versus which are the mere image of a thing
| and not the thing itself, but that's my point: it's only a
| moral/ethical dilemma, not a physical or informational state
| change between non-conscious and conscious. The universe doesn't
| care; consciousness is not a conserved property.
| FrustratedMonky wrote:
| "Assume it's possible to simulate a conscious brain with a
| deterministic program."
|
| I think this is touching on the 'philosophical zombie'
| arguments.
|
| If something is created, like a robot/AI, that is completely
| indistinguishable from a human, is that thing even possible
| without some inner subjective experience? It seems like people
| fall into two camps: one says that if it is a machine, then of
| course it is a zombie with no inner life; the second says the
| machine is conscious. The problem is that from the outside you
| can't really prove it either way, because the premise is that
| it is indistinguishable.
|
| So I think your example is an interesting extrapolation: the
| 'invention' is indistinguishable and deterministic, and now we
| split it up, compress it, etc. When would that inner experience
| go away? Would it end at some point?
|
| Of course, if a consciousness could be played back and forth
| like this, then that would be an argument for GPT having some
| consciousness.
| pavel_lishin wrote:
| > _I think this is touching on the 'philosophical zombie'
| arguments._
|
| As well as Buridan's ass:
| https://en.wikipedia.org/wiki/Buridan%27s_ass
| MattPalmer1086 wrote:
| These ideas are explored in Permutation City, by Greg Egan.
|
| https://en.m.wikipedia.org/wiki/Permutation_City
| lucubratory wrote:
| Some of those thought experiments were interesting, but I still
| landed on "That's still a conscious system" for all of them. I
| expected you to go much further with the permutations.
| jebarker wrote:
| This is a wonderful essay.
|
| I've always found the term AGI confusing. For example, how
| general does it need to be to qualify, and what specific
| cognitive capabilities does it need to exhibit? My gut feeling
| has always been that it's not a helpful guiding star for AI
| research.
|
| It seems better to be led by the problems we want to solve,
| since that makes it easier to define success; generality is
| still beneficial because general solutions let you solve more
| problems more efficiently. What this essay solidified for me is
| that what we call intelligence is really just a set of tools
| that allow us (humans) to solve a certain collection of
| problems. We mistakenly believe all those tools are cognitive,
| but really some are just evolved responses and instincts.
___________________________________________________________________
(page generated 2024-06-17 23:02 UTC)