[HN Gopher] Philosophy Eats AI
       ___________________________________________________________________
        
       Philosophy Eats AI
        
       Author : robg
       Score  : 50 points
       Date   : 2025-01-19 18:49 UTC (4 hours ago)
        
 (HTM) web link (sloanreview.mit.edu)
 (TXT) w3m dump (sloanreview.mit.edu)
        
       | Terr_ wrote:
       | > what counts as knowledge (epistemology), and how AI represents
       | reality (ontology) also shape value creation.
       | 
        | As a skeptic with only a few drums to beat, here is my quasi-
        | philosophical complaint about LLMs: we have a rampant problem
        | where humans confuse a character they perceive in a text
        | document with a real-world author.
       | 
       | In all these hyped-products, you are actually being given the
       | "and then Mr. Robot said" lines from a kind of theater-script.
       | This document grows as your contribution is inserted as "Mr. User
       | says", plus whatever the LLM author calculates "fits next."
       | 
        | So all these excited articles about how SomethingAI has learned
        | deceit or self-interest? Nah, they're really probing how well it
        | assembles text (learned from text we produce) in which we humans
        | can perceive a fictional character that exhibits those qualities.
        | That can include qualities we absolutely know the real-world LLM
        | does not have.
       | 
       | It's extremely impressive compared to where we used to be, but
       | not the same.
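        | 
        | Concretely, the document in question is just a growing list of
        | script lines. A minimal sketch in Python (the role names and the
        | generate() callable are illustrative stand-ins, not any vendor's
        | actual format):
        | 
        |     transcript = [
        |         {"role": "system",
        |          "content": "You are Mr. Robot, a helpful assistant."},
        |     ]
        | 
        |     def converse(user_text, generate):
        |         # "Mr. User says" is inserted into the script...
        |         transcript.append({"role": "user", "content": user_text})
        |         # ...and the model writes whatever "fits next" for the
        |         # Mr. Robot character, given the whole script so far.
        |         reply = generate(transcript)
        |         transcript.append({"role": "assistant", "content": reply})
        |         return reply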
        
         | TJSomething wrote:
         | That's one of the things. Even in human-written fiction, the
          | depth of any character you read about is pure smoke and
         | mirrors. People regularly perceive fictional characters as if
         | they are real people (and it's fun to do so), but it would be
         | impossible for an author to simulate a complete human being in
         | their head.
         | 
         | It seems that LLMs operate a lot like I would in improv. In a
         | scene, I might add, "This is the fifth time you've driven your
         | car into a ditch this year." I don't know what the earlier four
         | times were like. No one there had any idea I was even going to
         | say that. I just say it as a method of increasing stakes and
         | creating the illusion of history in order to serve a narrative
         | purpose. I'll often include real facts to serve the
         | verisimilitude of a scene, but I don't have time to do real
         | fact checking. I need to keep the momentum going and will
          | gladly make up facts as suits the narrative and my character.
        
           | exe34 wrote:
           | > it would be impossible for an author to simulate a complete
           | human being in their head.
           | 
            | unless it's a self-insert? or do you reckon even then it'll
            | be a lo-fi simulation, because the real-world input is
            | absent and the physics/social aspect is still being
            | simulated?
        
             | jdietrich wrote:
             | Humans just aren't very good at understanding their own
             | motivations. Marketers know this implicitly. Almost nobody
             | believes "I drink Coca-Cola because billions of dollars of
             | advertising have conditioned me to associate Coke with
             | positive feelings on a subconscious level", even if they
             | would recognise that as a completely plausible explanation
             | for why _other people_ like Coca-Cola.
        
         | og_kalu wrote:
         | As long as it affects the real world, it doesn't matter what
          | semantic category you feel compelled to push LLMs into.
         | 
         | If Copilot will no longer reply helpfully because your previous
         | messages were rude then that is a consequence. It doesn't
         | matter whether it was "really upset" or not.
         | 
          | If some future VLM robot decides to take your hand off as part
          | of some revenge plot, that's a consequence. It doesn't matter
          | if this is some elaborate role play. It doesn't matter if the
          | robot "has no real identity" and "cannot act on real
          | vengeance". Like, who cares? Your hand is gone and it's not
          | coming back.
          | 
          | Are there real-world consequences? Yes? Then the handwringing
          | over whether it's just "elaborate science fiction" or "real
          | deceit" is entirely meaningless.
        
         | anxoo wrote:
         | > In all these hyped-products, you are actually being given the
         | "and then Mr. Robot said" lines from a kind of theater-script.
         | This document grows as your contribution is inserted as "Mr.
         | User says", plus whatever the LLM author calculates "fits
         | next."
         | 
         | and we are creating such a document now, where "Terr_" plays a
         | fictional character who is skeptical of LLM hype, and "anxoo"
         | roleplays a character who is concerned about the level of AI
         | capabilities.
         | 
         | you protest, "no, i'm a _real person_ with _real thoughts_! the
          | character is me! the AI 'character' is a fiction created by
         | an ungodly pile of data and linear algebra!" and i reply, "you
         | are a fiction created by an ungodly mass of neuron activations
         | and hormones and neurotransmitters".
         | 
         | i agree that we cannot know what an LLM is "really thinking",
         | and when people say that the AIs have "learned how to [X]" or
         | have "demonstrated deception" or whatever, there's an
         | inevitable anthropomorphization. i agree that when people talk
         | to chatGPT and it acts "friendly and helpful", that we don't
         | really know whether the AI _is_ friendly and helpful, or
          | whether the "mind" inside is some utterly alien thing.
         | 
         | the point is, none of that matters. if it writes code, it
         | writes code. if it's able to discover new scientific insights,
         | or if it's able to replace the workforce, or if it's able to
         | control and manipulate resources, those are all concrete things
         | it will do in the real world. to assume that it will never get
         | there because it's just playing a fancy language game is
         | completely unwarranted overconfidence.
        
       | kelseyfrog wrote:
       | Philosophy eats AI because we're in the exploration phase of the
       | s-curve and there's a whole bunch of VC money pumping into the
        | space. When we switch to an extraction regime, we can expect a
        | lot of these conversations to evaporate and be replaced with
        | "what makes us the most money," regardless of philosophic
        | implication.
        
         | SequoiaHope wrote:
         | If that ever comes to pass. This is not guaranteed.
        
       | Sleaker wrote:
        | I'm confused by the premise that AI is eating software. What does
       | that even mean and what does it look like? AI is software, no?
        
         | MattPalmer1086 wrote:
         | The software that powers LLM inference is very small, and is
         | the same no matter what task you ask it to perform. LLMs are
         | really the neural architecture and model weights used.
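          | 
          | The core loop is small enough to sketch. A toy illustration
          | (assuming a "model" callable that maps a token sequence to
          | next-token scores; the behavior lives in the weights it wraps,
          | not in this code):
          | 
          |     import numpy as np
          | 
          |     def generate(model, tokens, n_new, temperature=1.0):
          |         # The entire "program" is this sampling loop.
          |         for _ in range(n_new):
          |             logits = model(tokens) / temperature
          |             probs = np.exp(logits - logits.max())  # softmax
          |             probs /= probs.sum()
          |             next_tok = np.random.choice(len(probs), p=probs)
          |             tokens.append(int(next_tok))
          |         return tokens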
        
         | jdietrich wrote:
         | There are a whole bunch of software problems where "just prompt
         | an LLM" is now a viable solution. Need to analyse some data?
         | You could program a solution, or you could just feed it to
         | ChatGPT with a prompt. Need to build a rough prototype for the
         | front-end of a web app? Again, you could write it yourself, or
         | you could just feed a sketch of the UI and a prompt to an LLM.
         | 
         | That might be a dead end, but a lot of people are betting a lot
         | of money that we're just at the beginning of a very steep
         | growth curve. It is now plausible that the future of software
         | might not be discrete apps with bespoke interfaces, but vast
         | general-purpose models that we interact with using natural
         | language and unstructured data. Rather than being written in
         | advance, software is extracted from the latent space of a model
         | on a just-in-time basis.
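          | 
          | For the data-analysis case, "just prompt an LLM" can be as
          | little as this (a sketch assuming the OpenAI Python client;
          | the model name and data file are illustrative):
          | 
          |     from openai import OpenAI
          | 
          |     client = OpenAI()  # reads OPENAI_API_KEY from the env
          | 
          |     with open("sales.csv") as f:  # hypothetical data file
          |         data = f.read()
          | 
          |     resp = client.chat.completions.create(
          |         model="gpt-4o",  # illustrative model name
          |         messages=[{"role": "user", "content":
          |                    "Summarise the main trends here:\n" + data}],
          |     )
          |     print(resp.choices[0].message.content)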
        
           | grey-area wrote:
           | A lot of the same people also recently bet huge amounts of
           | money that blockchains and crypto would replace the world's
           | financial system (and logistics and a hundred other
           | industries).
           | 
           | How did that work out?
        
             | jdietrich wrote:
             | A16z and Sequoia made some big crypto bets, but I don't
             | recall Google or Microsoft building new DCs for crypto
             | mining. There's a fundamental difference between VCs
             | throwing spaghetti against the wall and established tech
             | giants steering their own resources towards something.
        
         | Hoasi wrote:
         | It is even less meaningful than "software is eating the world".
         | But it sounds catchy, and people can remember it.
        
       | qrsjutsu wrote:
       | > The critical enterprise challenge is whether leaders will
       | possess the self-awareness and rigor to use philosophy as a
       | resource for creating value with AI
       | 
        | what the fuck. they haven't even done that with post-90s
        | technology in general, and it's not only because no intelligent
        | person wants to work among them that they will fall just as
        | short with AI. I'm still grateful they are doing a job.
       | 
        | but please: a dying multitude right at your feet, all you need
        | to save them - so you can learn even more from them - in your
        | hands, and you scale images, build drones for cleaning at home
        | and for war, and imitate people who love or need their jobs in
        | order to replace them.
       | 
        | and faking all those AI gains - deceit, self-interest and
        | whatnot - is so obviously just built-in linguistics that can be
        | read from a paper by someone who does not even speak the
        | language. it's "just" parameters and conditional logic, cool and
        | fancy and ready to eat up and digest almost any variation of
        | user input, but it's nowhere even close to intelligence, let
        | alone artificial intelligence.
       | 
        | philosophy eats nothing. there are those on all fours waiting
        | for whatever gives them status and recognition, and those who,
        | thankfully, stay silent so as not to give those leaders more
        | tools of power.
        
       | initramfs wrote:
       | The world is becoming an algorithm.
       | 
        | https://en.wikipedia.org/wiki/The_Creepy_Line
        | 
        | Algorithms create a compression of search values not unlike a
        | Cartesian plane.
       | 
       | The question is, will more people embrace the Cartesian
       | compression of ubiquitous internet communication?
        
         | rapnie wrote:
         | Isn't Nature like algorithms at work?
        
           | initramfs wrote:
           | yes and no. ones encoded by the previous generation, and
           | biological life is an open or partially open system
        
           | layer8 wrote:
           | You mean the journal? ;)
           | 
           | More seriously: An algorithm is discrete (consists of
            | discrete steps). Nature, however, appears to operate in a
           | continuous fashion.
        
       | polotics wrote:
       | I strongly disagree with the article on at least one point:
        | ontologies, as painstakingly hand-crafted jewels handed down from
        | the aforementioned philosophers, are the complete opposite of
        | what LLMs build bottom-up through their layers.
        
       | scoofy wrote:
       | I have multiple degrees in philosophy and I have no idea what
       | this article is even trying to say.
       | 
       | If anyone has access to the full article, I'm interested, but it
       | sounds like a lot of buzzwords and not a ton of substance.
       | 
        | The framing of AI through a philosophical lens is obviously
        | interesting, but a lot of the problems addressed in the intro are
        | pretty much irrelevant to the AI-ness of the information.
        
         | moffers wrote:
         | I was about to be very excited that my bachelors in Philosophy
         | might become relevant on its face for once in my life! But, I'm
          | not sure that flexing it professionally is going to get me to
          | the top of any neat AI projects.
         | 
         | But wouldn't that be great?
        
           | rvense wrote:
            | Once, I'd just started a new job and was asked to write "a
            | little bit" about myself for a slide for the first company
            | meeting. There were a couple of these slides because there
            | were a bunch of new people, and my little bit was in a font
            | about half the size of all the others, because I have a
            | humanities degree, so I can and will write something when
            | you ask me to.
        
         | readyplayernull wrote:
         | The article is about mapping Philosophy into AI project
         | management.
         | 
         | > Philosophical perspectives on what AI models should achieve
         | (teleology), what counts as knowledge (epistemology), and how
         | AI represents reality (ontology) also shape value creation.
         | Without thoughtful and rigorous cultivation of philosophical
         | insight, organizations will fail to reap superior returns and
         | competitive advantage from their generative and predictive AI
         | investments.
        
           | rvense wrote:
            | Doesn't that hold for all other applications of software,
            | and really of technology generally? Without further context,
            | that just seems to be saying you have to, like, think about
            | what the AI is doing and how you're applying it?
        
       | laptopdev wrote:
       | Is this available in full text anywhere without sign up?
        
       | redelbee wrote:
       | So we're back to the idea that only philosopher kings can shape
       | and rule the ideal world? Plato would be proud!
       | 
        | Jests aside, I love the idea of incorporating an all-encompassing
       | AI philosophy built up from the rich history of thinking, wisdom,
       | and texts that already exist. I'm no expert, but I don't see how
       | this would even be possible. Could you train some LLM exclusively
       | on philosophical works, then prompt it to create a new perfect
       | philosophy that it will then use to direct its "life" from then
       | on? I can't imagine that would work in any way. It would
       | certainly be entertaining to see the results, however.
       | 
       | That said, AI companies would likely all benefit from a team of
       | philosophers on staff. I imagine most companies would. Thinking
       | deeply and critically has been proven to be enormously valuable
       | to humankind, but it seems to be of dubious value to capital and
       | those who live and die by it.
       | 
       | The fact that the majority of deep thinking and deep work of our
       | time serves mainly to feed the endless growth of capital -
       | instead of the well-being of humankind - is the great tragedy of
       | our time.
        
         | XorNot wrote:
         | What's the philosophy department at the local steel fabricator
         | contributing exactly?
        
           | apsurd wrote:
           | To ponder whether there's any value in doing anything beyond
           | maximizing steel fabrication output.
           | 
           | if it's absurd to you to think that a steel fabrication
           | company should care about anything other than fabricating
           | more steel, well that's your philosophy.
           | 
           | there are other philosophies.
        
         | Hammershaft wrote:
         | > The fact that the majority of deep thinking and deep work of
         | our time serves mainly to feed the endless growth of capital -
         | instead of the well-being of humankind - is the great tragedy
         | of our time.
         | 
         | I'm not blind to when this goes horribly wrong, or when needs
         | go unaddressed because they aren't profitable, but most of the
         | time these interests are unintentionally well aligned.
        
         | alganet wrote:
          | There is a lot of this "philosopher king" stuff. Prophets,
          | übermenschen, tlatoanis. It seems foreign to the concept of
          | philosophy. As I see it, this comes more from the lineage of
          | the arts than the lineage of thinkers (it's not a criticism,
          | just an observation).
         | 
         | I think this is very obvious and both artists and philosophers
         | understand it.
         | 
         | I'm worried about the mercantilist guild. They don't seem to
         | get the message. Maybe I'm wrong, I don't really know much
          | about what they think. Their actions show disregard for the
         | other two guilds.
        
       | Onavo wrote:
       | No paywall
       | 
       | https://tribunecontentagency.com/article/philosophy-eats-ai/
        
       | alganet wrote:
       | Philosophy is mostly autophagous and self-regulating, I think.
       | It's a debug mode, or something like it.
       | 
       | It's not eating AI. It's "eating" the part of AI that was tuned
        | to disproportionately change the natural balance of philosophy.
       | 
       | Trying to get on top of it is silly. The debug mode is not for
       | sale.
        
       | treksis wrote:
        | wishful article
        
       | antonkar wrote:
       | How can you create an all-understanding all-powerful jinn that is
       | a slave in a lamp? Can the jinn be all-good, too? What is good
       | anyways? What should we do if doing good turns out to be
       | understanding and freeing others (at least as a long-term goal)?
       | Should our AI systems gradually become more censoring or more
       | freeing?
        
       | tomlockwood wrote:
       | Philosophy postgrad and now long time programmer here!
       | 
        | This article treats as a revelation the pretty trivially true
        | claim that philosophy _is_ an undercurrent of thought. If you
        | ask why we do science, the answer is philosophical.
       | 
       | But the mistake many philosophers make is extrapolating
       | philosophy being a discipline that reveals itself when
       | fundamental questions about an activity are asked, into a belief
       | that philosophy, as a discipline, is _necessary_ to that
       | activity.
       | 
       | AI doesn't require an understanding of philosophy any more than
       | science does. Philosophers may argue that people always wonder
       | about philosophical things, like, as the article says, teleology,
       | epistemology and ontology, but that relation doesn't require an
       | understanding of the theory. A scientist doesn't need to know any
       | of those words to do science. Arguably, a scientist _ought_ to
        | know, but they don't have to.
       | 
       | The article implies that AI leaders are currently _ignoring_
        | philosophy, but it isn't clear to me what ignoring the all-
       | pervasive substratum of thought would _look like_. What would it
       | look like for a person not to think about the meaning of it all,
       | at least once at 3am at a glass outdoor set in a backyard? And,
        | the article doesn't really stick the landing on why bringing
        | those thoughts to the forefront would mean philosophy will "eat"
        | AI. No argument from me against philosophy, though; I think a
        | sprinkling of it is useful, but a lack of philosophical theory
        | is not an obstacle to action, programming, or creating systems
        | that evaluate things. See: almost everyone.
        
       ___________________________________________________________________
       (page generated 2025-01-19 23:01 UTC)