[HN Gopher] AI, Heidegger, and Evangelion
___________________________________________________________________
AI, Heidegger, and Evangelion
Author : jger15
Score : 157 points
Date : 2025-05-24 14:26 UTC (1 day ago)
(HTM) web link (fakepixels.substack.com)
(TXT) w3m dump (fakepixels.substack.com)
| dtagames wrote:
| This is fantastic. Perhaps the best philosophical piece on AI
| that I've read.
| DiscourseFan wrote:
| I need to write something, then.
| smokel wrote:
| Please do. It is unfortunate that those who shout hardest are
| not necessarily the smartest. There is a lot of nonsense
| being put out there, and it would be refreshing to read
| alternative perspectives.
|
| Edit: this was a generic comment, not judging the article. I
| still have trouble understanding what its premise is.
| httbs wrote:
| Great read
| neuroelectron wrote:
| My chatgpt doesn't like nyc that much:
|
| New York City, as a global symbol, exports the myth of America--
| its exceptionalism, hustle culture, capitalism-as-dream, fashion,
| Wall Street bravado, media dominance, cultural swagger. NYC has
| always been a billboard for "brand America," selling a narrative
| of limitless opportunity, grit-as-glory, and urban
| sophistication. Think Times Square's overstimulation, Broadway's
| fantasy, Wall Street's speculation, and how these are consumed
| worldwide as aspirational content.
|
| But what's exported isn't necessarily real--it's hype. The
| marketed dream, not the lived reality.
|
| "...and its biggest import is grime and grief"
|
| In contrast, what flows into NYC is the cost of that image: the
| labor of the marginalized, the psychological toll, the physical
| debris. "Grime" evokes literal pollution, overwork, and class
| stratification; "grief" brings in the emotional fallout--
| displacement, burnout, violence, economic precarity, and cycles
| of trauma.
|
| NYC absorbs the despair of a world it pretends to uplift.
| Refugees, artists, outcasts, and exhausted believers in the
| American Dream all converge here, only to be consumed by the very
| machine that exports the myth of hope.
| TimorousBestie wrote:
| > Instead, Heidegger compels us to do something much harder: to
| see the world as it is being reframed by technology, and then to
| consciously reclaim or reweave the strands of meaning that risk
| being flattened.
|
| As a call to action this is inadequate. I have no idea what this
| is persuading me to do.
|
| If I dig into how Heidegger solved this problem in his own life,
| well, I don't think that should be replicated.
| daseiner1 wrote:
| "The Question Concerning Technology" [1] mentioned in this
| piece is dense but can be understood by the literate layman, I
| think, with patience and a bit of work.
|
| Re: "call to action", part of Heidegger's project by my read is
| to interrogate such phrases. I think he would reject the idea
| that "action" is what we need, and that orienting ourselves
| towards the world in terms of "action" obscures the Question of
| Being. He himself offers no real way out. In his posthumously
| published _Der Spiegel_ interview [2] he himself says "only a
| God can save us".
|
| I assume you're making a snide reference to his involvement
| with Nazism, which I'm not going to attempt to downplay or
| respond to here. He himself in his later life, however, went
| and lived a humble life in the Black Forest. Can or should we
| all "return to the land"? No. But his writing certainly has
| expanded my view of the world and my "image of thought". He is
| a worthwhile study.
|
| _How to Read Heidegger_ [3] is a great primer for any who may
| be interested.
|
| [1]
| https://www2.hawaii.edu/~freeman/courses/phil394/The%20Quest...
|
| [2] https://www.ditext.com/heidegger/interview.html
|
| [3] https://a.co/d/dK5dp2t
|
| P.S. just noticed/remembered that my username is a Heidegger
| reference. heh.
| gchamonlive wrote:
| Leave passivity behind. You should put work into understanding
| what's required of you. That takes effort, which is why it's so
| easy to fall back into passivity, but there are things that must
| be done just because it's the right thing to do. Doing anything
| other than that is akin to committing philosophical suicide.
| TimorousBestie wrote:
| I don't think I have an ethical duty to parse obscurantist
| nonsense.
| daseiner1 wrote:
| Contempt prior to investigation ought not be a point of
| pride, I think.
| viccis wrote:
| There's nothing obscure about that. You might be out of the
| habit of consuming challenging material, but it's
| definitely not a good response to react reflexively with
| contempt for something that takes a moment of thought to
| understand. There's already enough vulgar anti-
| intellectualism in society right now without adding to it.
| gchamonlive wrote:
| If that's obscurantist nonsense you aren't going to get far
| with Heidegger.
| 13years wrote:
| > AI is not inevitable fate. It is an invitation to wake up. The
| work is to keep dragging what is singular, poetic, and profoundly
| alive back into focus, despite all pressures to automate it away.
|
| This is the struggle: the race to automate everything, to turn
| all of our social interactions into algorithmic digital bits.
| However, I don't think people are just going to wake up from
| calls to wake up, unfortunately.
|
| We typically only wake up to anything once it is broken. Society
| has to break from the over optimization of attention and
| engagement. Not sure how that is going to play out, but we
| certainly aren't slowing down yet.
|
| For example, take a look at the short clip I have posted here. It
| is an example of just how far everyone is scaling bot and content
| farms. It is an absolute flood of noise into all of our knowledge
| repositories. https://www.mindprison.cc/p/dead-internet-at-scale
| verisimi wrote:
| > However, I don't think _people_ are just going to wake up
| from calls to wake up, unfortunately.
|
| > _We_ typically only wake up to anything once it is broken.
| _Society_ has to break from the over optimization of attention
| and engagement.
|
| I don't think anyone will be waking up as long as their
| pronouns are 'we' and 'us' (or 'people', 'society'). Waking up
| or individuation is a personal, singular endeavour - it isn't a
| collective activity. If one hasn't even grasped who one is, if
| one is making a category error and identifies as 'we' rather
| than 'I', all answers will fail.
| pixl97 wrote:
| The Culture dives into this concept with the idea of
| hegemonizing swarms, and Bostrom touches on this with
| optimizing singletons.
|
| Humans are amazing min/maxers; we create vast and, at least
| temporarily, productive monocultures. At the same time, a
| scarily large portion of humanity will burn and destroy
| something of beauty if it brings them one cent of profit.
|
| Myself I believe technology and eventually AI were our fate
| once we became intelligence optimizers.
| 13years wrote:
| > Myself I believe technology and eventually AI were our fate
| once we became intelligence optimizers.
|
| Yes, everyone talks about the Singularity, but I see the
| instrumental point of concern to be something prior, which
| I've called the Event Horizon. We are optimizing, but no longer
| with any understanding of the outcomes.
|
| "The point where we are now blind as to where we are going.
| The outcomes become increasingly unpredictable, and it
| becomes less likely that we can find our way back as it
| becomes a technology trap. Our existence becomes dependent on
| the very technology that is broken, fragile, unpredictable,
| and no longer understandable. There is just as much
| uncertainty in attempting to retrace our steps as there is in
| going forward."
| pixl97 wrote:
| >but no longer with any understanding of the outcomes.
|
| There's a concept in driving where your braking distance
| exceeds your view/headlight range at a given speed. We've
| stomped on the accelerator and the next corner is rather sharp.
|
| Isaac Asimov did a fictional version of this in the
| Foundation trilogy.
| 13years wrote:
| Yes, that's an excellent description.
| psychoslave wrote:
| It's a very idealistic view to believe there was ever a point
| where some people had a genuinely clearer, more precise, and
| accurate picture of what was going to come.
| gsf_emergency wrote:
| Iirc Eva's Instrumentality comes from Cordwainer Smith..
|
| A subtlety missed by TFA but not necessarily by Eva:
| Government is meant to be the Instrument (like AI), NOT
| the people/meat.
|
| https://en.wikipedia.org/wiki/Instrumentality_of_Mankind#
| Cul...
|
| Eva's optimization: Human Instrumentality => self-governance.
| Which is not that ambiguous, one could say; less so than the
| Star Child (2001).
|
| "Of the people" vs <of the people>
| exe34 wrote:
| > We are optimizing, but no longer with any understanding of
| the outcomes.
|
| That's how the Vile Offspring is born.
| jsosshbfn wrote:
| I feel confident humans could not define intelligence if
| their survival depended on it.
|
| Tbh, the only thing I see when looking at Terminator is a
| metaphor for the market. It makes more sense than any literal
| interpretation
| jfarmer wrote:
| John Dewey on a similar theme, about the desire to make
| everything frictionless and the role of friction. The fallacy
| that because "a thirsty man gets satisfaction in drinking
| water, bliss consists in being drowned."
|
| > The fallacy in these versions of the same idea is perhaps the
| most pervasive of all fallacies in philosophy. So common is it
| that one questions whether it might not be called _the_
| philosophical fallacy. It consists in the supposition that
| whatever is found true under certain conditions may forthwith
| be asserted universally or without limits and conditions.
|
| > Because a thirsty man gets satisfaction in drinking water,
| bliss consists in being drowned. Because the success of any
| particular struggle is measured by reaching a point of
| frictionless action, therefore there is such a thing as an all-
| inclusive end of effortless smooth activity endlessly
| maintained.
|
| > It is forgotten that success is success of a specific effort,
| and satisfaction the fulfilment of a specific demand, so that
| success and satisfaction become meaningless when severed from
| the wants and struggles whose consummations they are, or when
| taken universally.
| dwaltrip wrote:
| Our societies and our people are overdosing on convenience
| and efficiency.
| the_af wrote:
| Agreed.
|
| I remember a few years back, here on HN everyone was
| obsessed with diets and supplements and optimizing their
| nutrients.
|
| I remember telling someone that eating is also a cultural
| and pleasurable activity, that it's not _just_ about
| nutrients, and that it's not always meant to be optimized.
|
| It wasn't well received.
|
| Thankfully these days that kind of post is much less
| common here. That particular fad seems to have lost its
| appeal.
| unionjack22 wrote:
| Oh yeah, it's both funny and understandable how we've
| swung from the mania of Huel-esque techbro beliefs about
| nutrition to the current holistic-eating "beef tallow"
| and no-seed-oils movement. I think we realized guzzling
| slop alone is spiritually empty.
| juenfift wrote:
| > However, I don't think people are just going to wake up from
| calls to wake up, unfortunately.
|
| > We typically only wake up to anything once it is broken.
| Society has to break from the over optimization of attention
| and engagement.
|
| But here is the thing: we cannot let this bleak possibility
| come to pass.
|
| It is, essentially, morally wrong to, for lack of a better
| phrase, "sit on our asses and do nothing."
|
| Now, I am aware that society has hit the accelerator and we are
| driving toward a wall. Call me a naive, idiotic optimist; in
| fact, call me a fool, an idiot, a "fucking r**ed wanker" as
| someone called me long ago, for all I damn care. But I am a
| fervent believer that we can change, that we can stop this.
|
| This has to stop, because it is morally wrong to just let it
| happen. How, I'm not sure, but I know for certain we have to
| start somewhere.
|
| Because it's the right option.
| jwalton wrote:
| It's somehow a little poetic that the author's chatgpt example
| has already been plagiarized by a realtor blog:
| https://www.elikarealestate.com/blog/beautiful-suffering-of-...
| Garlef wrote:
| This made me happy: It offers an interesting take on AI.
|
| (After reflecting on this a bit, I think this is for the
| following reason: not only does it take a step back to offer a
| meta perspective, it also does so without falling into the trap
| of rooting that perspective in the hegemonic topos of our
| everyday discourse (economics).
|
| Usually, takes on AI are very economic in nature: "Gen AI is
| theft", "We/our jobs/our creativity will all be replaced", "The
| training data is produced by exploiting cheap labour".
|
| In this sense, this perspective avoids the expected not in one
| but in two ways.)
| deadbabe wrote:
| We never valued the human element in the work that surrounds us.
| Do you care that the software engineer who produced the CRUD app
| you use everyday had a "craftsman mentality" toward code? Do you
| care about the hours a digital artist spent to render some CGI
| just right in a commercial? Do you appreciate the time a human
| took to write some local news article?
|
| Probably not, you probably didn't even notice, and now it's over.
| It's too late to care. These things will soon be replaced with
| cheaper AI pipelines and much of what we consume or read
| digitally will be proudly AI generated or, at best, merely
| suspected of being AI generated. Did you know that soon you'll
| even be able to install browser plugins that will automatically
| pay you to have AI insert ads into comments you write on popular
| websites? It's true, and people will do it, because it's an easy
| way to make money.
|
| Reversing this AI trend means everyone should just do things the
| hard way, and that's just not going to happen. If no one cares
| about how you do your work (and they really don't give a fuck)
| you might as well use AI to do it.
| micromacrofoot wrote:
| That's not the point at all. The question is: in the face of
| this inevitability of slop, how do we create meaning for
| ourselves?
| deadbabe wrote:
| By choosing to do things the hard way. There is no meaning
| without struggle. And keep in mind some of that struggle will
| mean accepting the fact that others using AI will surpass
| you, and even be praised.
| codr7 wrote:
| Surpass you on a path that leads in circles; sooner or later
| nothing will work and few will remember how to use their
| brains for anything beyond slop prompting.
| akomtu wrote:
| AI will be the great divider of humanity. It's one of those ideas
| that can't be ignored or waited out, and everyone will need to
| take a decisive stance on it. Human civilization and machine
| civilization won't coexist for long.
| keiferski wrote:
| A bit of a rambling essay without much depth, but it did make me
| wonder: if AI tools weren't wrapped in a chatbot pretending to be
| a person, would all of this hullabaloo about AI and human nature
| go away? That seems to be the root of the unease many people have
| with these tools: they have the veneer of a human chatting but
| obviously aren't quite there.
|
| I tend to treat AI tools as basically just a toolset with an
| annoying chat interface layered on top, which in my experience
| leads me to _not_ feel any of the feelings described in the essay
| and elsewhere. It's just a tool that makes certain outputs easier
| to generate, an idea calculator, if you will.
|
| As a result, I'm pretty excited about AI, purely because these are
| such powerful creative tools - and I'm not fooled into thinking
| this is some sort of human replacement.
| Avicebron wrote:
| I wonder if investors would have dumped the same amount of
| money in if it was pitched as something like "Semantic encoding
| and retrieval of data in latent space" vs "hey ex-machina
| though"
| keiferski wrote:
| Definitely, whether they intrinsically believe it or not, the
| hunt for AGI is driving a lot of funding rounds.
| goatlover wrote:
| They think it will eliminate most payroll costs while
| driving productivity way up, without destroying the economy
| or causing a revolution. They also don't take worries about
| misaligned AGI very seriously.
| FridgeSeal wrote:
| I have to wonder if they've really thought through the
| consequences of their own intentions.
|
| > eliminate payroll costs
|
| And what do they think everyone who's been shunted out of
| a livelihood is going to do? Roll over and die? I don't
| see "providing large scale social support" being present
| in any VC pitch deck. How do they imagine that "without
| destroying the economy" will happen in this scenario?
| ambicapter wrote:
| > if AI tools weren't wrapped in a chatbot pretending to be a
| person
|
| I don't think this is the central issue, considering all the
| generative AI tools that generate art pieces, including
| various takes on the cherished styles of still-living artists.
| keiferski wrote:
| Right, but then the conversation would mostly be akin to
| photography replacing portraiture - a huge technological
| change, but not one that makes people question their
| humanity.
| numpad0 wrote:
| It's remarks like these that strengthen my suspicion that
| continuous exposure to AI output causes model collapse in
| humans too.
| gavmor wrote:
| Yes, it would--the dialogic interface is an anchor weighing us
| down!
|
| Yes, yes, it's an accessible demonstration of the technology's
| mind-blowing flexibility, but all this "I, you, me, I'm"
| nonsense clutters the context window and warps the ontology in
| a way that introduces a major epistemological "optical illusion"
| that exploits (inadvertently?) a pretty fundamental aspect of
| human cognition--namely our inestimably powerful faculty for
| "theory of mind."
|
| Install the industrial wordsmithing assembly line behind a
| brutalist facade--any non-skeuomorphic GUI from the past 20
| years oughta do.
|
| Check out eg https://n8n.io for a quick way to plug one-shot
| inference into an ad-hoc pipeline.
| niemandhier wrote:
| I fully agree. Where do we get the training data to create a
| base model? Where do we source the terabyte of coherent text
| that is devoid of ego?
| numpad0 wrote:
| It's just not there. Whether it's text, chat, images, or video,
| the quality sometimes _appears_ normal at a cursory glance, but
| isn't actually up there with that of humans. That's the
| problem.
|
| ---
|
| This gets into cheap sci-fi territory, but: I think someone
| should put people in an fMRI and make them watch archived
| recordings of live streams of something consistent and devoid
| of meaning, like random Twitch channels on the same theme, or
| foreign-language sports, and sort the results by date relative
| to the time of the experiment.
|
| The expected result would be that the data is absolutely random,
| maybe except for something linked to seasons or weather, and/or
| microscopic trend shift in human speech, and/or narrator skill
| developments. But what if it showed something else?
| djoldman wrote:
| 1. I suspect that the vast majority couldn't care less about the
| philosophical implications. They're just going to try to adapt as
| best they can and live their lives.
|
| 2. LLMs are challenging the assumption, often unvoiced, that
| humans are special, unique even. A good chunk of people out there
| are starting to feel uncomfortable because of this. That LLMs are
| essentially a distillation of human-generated text makes this
| next-level ironic: occasionally people will deride LLM output...
| In some ways this is just a criticism of human generated text.
| Barrin92 wrote:
| > LLMs are challenging the assumption, often unvoiced, that
| humans are special, unique even. A good chunk of people out
| there are starting to feel uncomfortable because of this.
|
| I'm gonna be honest, after Copernicus, Newton and Darwin it's a
| bit hilarious to think that this one is finally going to do
| that worldview in. If you were already willing to ignore that
| we're upright apes, in fact not in the center of the universe,
| and that things just largely buzz around without a telos, I'd
| think you might as well rationalize machine intelligence
| somehow as well, or have given up like 200 years ago.
| pglevy wrote:
| Just happened to read Heidegger's Memorial Address this morning.
| Delivered to a general audience in 1955, it is shorter and more
| accessible. Certainly not as complex as his later works but
| related.
|
| > Yet it is not that the world is becoming entirely technical
| which is really uncanny. Far more uncanny is our being unprepared
| for this transformation, our inability to confront meditatively
| what is really dawning in this age.
|
| https://www.beyng.com/pages/en/DiscourseOnThinking/MemorialA...
| (p43)
| bowsamic wrote:
| This is basically my experience. LLMs have made me deeply
| appreciative of real human output. The more technology degrades
| everything, the more clearly it shows what matters.
| 13years wrote:
| I think it is creating a growing interest in authenticity among
| some, although it still feels like this is a minority opinion.
| Every content platform is being flooded with AI content. Social
| media floods it into all of my feeds.
|
| I wish I could push a button and filter it all out. But that's
| the problem we have created. It is nearly impossible to do. If
| you want to consume truly authentic human content, it is nearly
| impossible to know whether you are. Everyone I interact with
| now might just be
| a bot.
| vunderba wrote:
| I'm not sure how this affects the premise of the article, but the
| "jaw dropping quote created by an LLM about NYC" reads like so
| much pretentious claptrap: Because New York City
| is the only place where the myth of greatness still feels
| within reach--where the chaos sharpens your ambition, and every
| street corner confronts you with a mirror: who are you
| becoming? You love NYC because it gives shape to your
| hunger. It's a place where anonymity and intimacy coexist;
| where you can be completely alone and still feel tethered
| to the pulse of a billion dreams.
|
| If I'd read this even before ChatGPT was a mote in the eye of
| Karpathy, my eyes would have rolled so far back that
| metacognitive mindfulness would have become a permanent passive
| ability stat.
|
| The author of Berserk said it so much better: _" Looking from up
| here, it's as if each flame were a small dream, for each person.
| They look like a bonfire of dreams, don't they? But there's no
| flame for me here. I'm just a temporary visitor, taking comfort
| from the flame."_
| antithesizer wrote:
| >reads like so much pretentious claptrap
|
| >my eyes would have rolled so far back that metacognitive
| mindfulness would have become a permanent passive ability stat
|
| Yes, I agree it is impressively lifelike, just like it was
| written by a real flesh-and-blood New Yorker.
| righthand wrote:
| I think it reads like an ad for what people think a real New
| Yorker sounds like. No one's sticking around for you to
| ramble about the social weather.
| randallsquared wrote:
| I lived in NYC for a few years, and there are definitely
| lots of New Yorkers who would find themselves described in
| the quoted passage.
| antithesizer wrote:
| Feeling implicated?
| keysdev wrote:
| Or it could be a piece from The New Yorker.
| echelon wrote:
| > reads like so much pretentious claptrap
|
| Somewhere out there, there's an LLM that reads exactly like us
| pretentious HN commentators. All at once cynical, obstinate,
| and dogmatic. Full of the same "old man yells at cloud" energy
| that only a jaded engineer could have. After years of scar
| tissue, overzealous managers, non-technical stakeholders,
| meddling management, being the dumping ground for other teams,
| mile-high mountains of Jira tickets.
|
| Techno-curmudgeonism is just one more flavor that the LLMs will
| replicate with ease.
| evidencetamper wrote:
| Unlike your comment, which comes from insight drawn from your
| lived experience in this community, the quote is a whole bunch
| of nothing.
|
| Maybe the nothingness is because LLMs can't reason. Maybe it's
| because they were trained a bit too much on marketing speak and
| corporate speak.
|
| The thing is, while LLMs can replicate this or that style,
| they can't actually express opinions. An LLM can never not be
| fluff. Maybe different types of AI can make use of LLMs as
| components, but as long as we keep putting all of our tokens
| into LLMs, we will forever get slop.
| echelon wrote:
| > LLMs can't reason
|
| > we will forever get slop
|
| I think the New York quote is a lot better than whatever I
| wrote. A human saw that diamond in the rough and decided to
| share it, and that was the transformative moment.
|
| A human curator of AI outputs, a tastemaker if you will,
| can ensure we don't see the sloppy outputs. Only the
| sublime ones that exceed expectations.
|
| I suspect that artists and writers and programmers will
| pick and choose the best outputs to use and to share. And
| when they do, their AI-assisted work will stand above
| our current expectations of whatever limitations we think
| AI has.
| jsosshbfn wrote:
| Eh, I'll take it any day over the cocaine-fueled
| techno-optimism.
| bawolff wrote:
| I'd say it reads like propaganda for a place I do not live.
|
| Sounds great to the target audience but everyone else just
| rolls their eyes.
| geon wrote:
| Yes. I realize some people like places like that. You
| couldn't pay me to live there.
| barrenko wrote:
| Or a Ghost in the Shell perspective.
| atoav wrote:
| Sounds to me like a description of what it feels like to live
| in any sort of metropolis. Georg Simmel, one of the first
| sociologists, wrote similar observations on how city dwellers
| (in 1900s Berlin) would mentally adapt themselves within the
| context of this city in his 1903 work >>The Metropolis and
| Mental Life<<.
|
| The anonymity and (from the POV of a country bumpkin: rudeness)
| are basically a form of self-protection, because a person would
| go mad if they had to closely interact with each human being
| they met. Of course this leads to its own follow-up problems of
| solitude within a mass of people etc.
|
| But if you e.g. come from a close-knit rural community with
| strict (religious?) rules, the typical rural chit-chat and
| surveillance, something like the anonymity of a city can indeed
| feel like a revelation, especially if you see yourself as an
| outcast within the rural community, e.g. because you differ
| from the expected norms there in some way.
|
| I am not sure about the English translation of Simmel's text,
| as I read it in German, but as I remember it, it was quite on
| point, not too long, and made observations that are all still
| valid today. Recommended read.
| wagwang wrote:
| I see this "I can detect AI bcuz its cringe" view everywhere
| but all the double-blind tests show that people like AI poetry
| and AI writing slightly more than the greats. Also, the other
| comment is spot on in that this reads exactly like some op-ed
| articles in the NYT, lol.
| Hammershaft wrote:
| Domain experts seem to routinely do better at picking out
| slop, but I'd like an actual study. I was unimpressed with
| the AI art Turing test.
| roxolotl wrote:
| This piece addresses the major thing that's been frustrating to
| me about AI. There's plenty else to dislike, the provenance, the
| hype, the potential impacts, but what throws me the most is how
| willing many people have been to surrender themselves and their
| work to generative AI.
| HellDunkel wrote:
| This is the most perplexing thing about AI. And it is not just
| their own work but also every product of culture they love,
| which they are ready to surrender for being "ahead of the
| curve" in very shallow ways.
| myaccountonhn wrote:
| There are quite a few practical problems that bother me with AI:
| centralization of power, enablement of fake news, AI porn of real
| women, exploitation of cheap labour to label data, invalid AI
| responses from tech support, worse quality software, lowered
| literacy rates, the growing environmental footprint.
|
| The philosophical implications are maybe the least of my worries,
| and maybe a red herring? It seems like the only thing those in
| power are interested in discussing, while very real damage is
| being done.
| 13years wrote:
| A philosophical lens can sometimes help us perceive the root
| drivers of a set of problems. I sometimes call AI humanity's
| great hubris experiment.
|
| AI's disproportionate capability to influence and capture
| attention, relative to its productive output, is a significant
| driver of so many negative outcomes.
| niemandhier wrote:
| In Heidegger's philosophy, objects and people are defined by
| their relations to the real world; he calls this
| "In-der-Welt-sein" (being-in-the-world).
|
| LLMs pose an interesting challenge to this concept, since they
| cannot interact with the physical world, but they nevertheless
| can act.
| throwawaymaths wrote:
| "When an LLM "describes the rain" or tries to evoke loneliness at
| a traffic light, it produces language that looks like the real
| thing but does not originate in lived experience"
|
| does it not originate in the collective experience ensouled in
| the corpus it is fed?
| bombdailer wrote:
| Meanwhile this article is quite clearly written (or cleaned up)
| by AI. Or perhaps too much dialectic with AI causes one to
| internalize its tendency to polish turds smooth. It just has the
| unmistakable look of something where the original writing was
| "fixed up" and what remains is exactly the thing is warns
| against. I understand the pull to get an idea across as
| efficiently as possible, but sucking the life out of it ain't the
| way.
| poopiokaka wrote:
| Clearly written by AI
| johnea wrote:
| > What unsettles us about AI is not malice, but the vacuum where
| intention should be. When it tries to write poetry or mimic human
| tenderness, our collective recoil is less about revulsion and
| more a last stand, staking a claim on experience, contradiction,
| and ache as non-negotiably ours.
|
| Vacuous BS like this sentence makes me think this whole article
| is LLM-generated text.
|
| What unsettles me isn't some existential "ache", it isn't even
| the LLM tech itself (which does have _some_ very useful
| applications), it's the gushing, unqualified anthropomorphization
| by people who aren't technically qualified to judge it.
|
| The lay populace is all gaga, while technically literate people
| are by and large the majority of those raising red flags.
|
| The topic of the snippet about "life in NYC" is the perfect
| application for the statistical sampling and reordering of
| words already written by OTHER PEOPLE.
|
| We can take the vacuous ramblings of every would-be poet who ever
| moved to NYC, and reorder them into a "new" writing about some
| subjective topic that can't really be pinned down as correct or
| not. Of course it sounds "human"; it was trained on preexisting
| human writing. duh...
|
| Now, try to apply this to the control system for your local
| nuclear power plant and you definitely will want a human expert
| reviewing everything before you put it into production...
|
| But does the c-suite understand this? I doubt it...
| layer8 wrote:
| > 2003 Space Odyssey
|
| ???
| gkanai wrote:
| yeah that made me stop too. Who, writing on these topics, does
| not know that the movie is 2001: A Space Odyssey?
| BlueTemplar wrote:
| And just how do you know LLMs don't have a soul, hmm?
|
| "Uncanny valley" is interesting here because I am pretty sure I
| would have failed this "Turing Test" had I stumbled on this text
| out of context. But yeah, within context, there is something of
| this rejection indeed... And there would be probably a lot more
| acceptance of AIs if they were closer to humans in other aspects.
| alganet wrote:
| > In Neon Genesis Evangelion, the "Human Instrumentality Project"
| offers to dissolve all suffering through perfect togetherness.
|
| That is what Gendo says, and it is obviously a lie. It's an
| _unreliable universe_ story: you don't really know anything. Even
| the most powerful characters lack knowledge of what is going on.
|
| All the endings, of all revisions, include Gendo realizing he
| didn't know something vital (then, after that, the story becomes
| even more unreliable). If that's the goal of the story (Gendo's
| arc of pure loss despite absolute power), it's not ambiguous at
| all.
|
| So, very strange that you used the reference to relate to AI.
| aspenmayer wrote:
| Yeah, I would agree. NGE is like a reverse Garden of Eden
| story, where the Adam and Eve escape from society to some kind
| of idealistic paradise because they were the ones chosen by
| humans who essentially had created a modern day reverse Tower
| of Babel to become gods (remember Tokyo-3 is underground
| skyscrapers that can pop above ground?), and created their own
| fake plagues in the form of the Angels as justification to
| create the EVAs, which were merely a cover story to fight their
| fake plagues, but which were actually necessary to trigger the
| Human Instrumentality Project, so that they could become gods
| for real.
|
| NGE is an allegory for our present, with something like 9/11
| truther government coverup false flag attack paranoia, in the
| form of the one kid who always has a VHS camcorder, who is kind
| of a stand in for a conspiracy theorist who is accidentally
| correct, combined with Christian apocalyptic eschatology,
| combined with Japanese fears about being the only non-nuclear
| armed modern democracy in some hypothetical future, and some
| mecha fights and waifus. It's us, the little people versus
| industrial gods.
|
| Gendo was a true believer, he just became jealous of his own
| son because Shinji was able to pilot the EVAs, and thus was
| forced to confront his own poor treatment of Shinji. Once Gendo
| realized that S.E.E.L.E. (the U.N. group formed to fight the
| Angels) may not know what they're doing, before they can
| trigger Instrumentality with themselves via the synthetic
| Angels, Gendo triggers Instrumentality with Shinji and Asuka.
| So in that way, I would say that Gendo was lying because he
| wanted to trigger Instrumentality himself so he could bring
| back his dead wife, but had to settle for indoctrinating his
| son to do it by proxy.
|
| Gendo was lying, not about the fact that the Human
| Instrumentality Project does what it says on the tin, but about
| how many eggs he had to break to make that omelet. Rather than
| trust Instrumentality and thus the literal future of humanity
| to literal faceless bureaucrats, Gendo put his son on the
| throne and told him to kill their false gods and become one
| himself, and trusted that love would conquer all in the end.
| Gendo lied to Shinji so that he could tell him a deeper truth
| that he could never say aloud in words, especially after the
| loss of his wife, that he loved his son, and he did that by
| allowing Shinji to create a world without pain, whatever that
| meant to him. Gendo was a flawed man and a genius who was duped
| to become a useful idiot for the deep state, a true believer of
| his own bullshit, but he loved his son in his extremely
| stereotypically warped Japanese way, and because his son was
| able to accept that love and learn to love himself, Shinji was
| able to love the world and his place in it, and thus achieved
| enlightenment via his realization that heaven was on earth all
| along.
|
| "God's in his heaven, all is right with the world," indeed.
|
| If anything, AI is part of what might one day become our own
| Human Instrumentality Project, but in and of itself, I don't
| think it's enough. AIs aren't yet effectively embodied.
|
| I think Final Fantasy VII would be a better story/setting to
| explore for ideas related to AI. Sephiroth is a "perfect"
| synthetic human, and he basically turns himself into a
| paperclip maximizer that runs on mako energy by co-opting
| Shinra Corp via a literal hostile takeover of the parent
| company of the lab that created him.
| freen wrote:
| Ted Chiang: "I tend to think that most fears about A.I. are best
| understood as fears about capitalism. And I think that this is
| actually true of most fears of technology, too. Most of our fears
| or anxieties about technology are best understood as fears or
| anxiety about how capitalism will use technology against us. And
| technology and capitalism have been so closely intertwined that
| it's hard to distinguish the two."
|
| https://www.nytimes.com/2021/03/30/podcasts/ezra-klein-podca...
___________________________________________________________________
(page generated 2025-05-25 23:02 UTC)