[HN Gopher] OracleGPT: Thought Experiment on an AI Powered Executive
       ___________________________________________________________________
        
       OracleGPT: Thought Experiment on an AI Powered Executive
        
       Author : djwide
       Score  : 53 points
       Date   : 2026-01-26 15:06 UTC (18 hours ago)
        
 (HTM) web link (senteguard.com)
 (TXT) w3m dump (senteguard.com)
        
       | alanbernstein wrote:
       | Considering things like Palantir, and the DOGE effort run
       | through Musk, it seems inconceivable that this is not already
       | the case.
       | 
       | I think I'm more curious about the possibility of using a special
       | government LLM to implement direct democracy in a way that was
       | previously impossible: collecting the preferences of 100M
       | citizens, and synthesizing them into policy suggestions in a
       | coherent way. I'm not necessarily optimistic about the idea, but
       | it's a nice dream.
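       | 
       | As a purely illustrative sketch of the "synthesizing" step
       | (the summarize callable stands in for whatever model call you
       | would actually use; none of this is a real system):
       | 
       |   from itertools import islice
       | 
       |   def batched(items, n):
       |       it = iter(items)
       |       while batch := list(islice(it, n)):
       |           yield batch
       | 
       |   def synthesize(preferences, summarize, batch_size=1000):
       |       # Map: condense each batch of raw citizen preferences.
       |       partials = [summarize(b)
       |                   for b in batched(preferences, batch_size)]
       |       # Reduce: merge summaries in rounds until one remains.
       |       while len(partials) > 1:
       |           partials = [summarize(b)
       |                       for b in batched(partials, batch_size)]
       |       return partials[0]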
        
         | stewh_eng wrote:
         | Indirectly, this is kind of what I was trying to get at in
         | this weekend project
         | https://github.com/stewhsource/GovernmentGPT using the
         | British Commons debate history as a starting point to capture
         | divergent views across political affiliation, region, and
         | role. Changes over time would be super interesting, but I
         | never had time to dig into that. TL;DR: it worked
         | surprisingly well, and I know a few students have picked it
         | up to continue this theme in their research projects.
        
           | bahmboo wrote:
           | That looks very interesting. It could use a demo or some
           | examples for those of us with short attention spans. It
           | would be cool to feed it into TTS or video generation like
           | Sora.
        
         | djwide wrote:
         | Thanks for the comment. It's interesting to think about, but
         | I am also skeptical about who would be doing the
         | "collecting" and "synthesizing". Both tasks are potentially
         | loaded with political bias. Perhaps it's still better than
         | our current system, though.
        
         | Sheeny96 wrote:
         | Sounds like Helios https://www.youtube.com/watch?v=swbGrpfaaaM
        
         | Zagitta wrote:
         | Centralising it is definitely the wrong way to go about it.
         | 
         | It'd be much better to train an agent per citizen, one
         | that's under their control, and have it participate in a
         | direct democracy setup.
        
         | ativzzz wrote:
         | > special government LLM to implement direct democracy
         | 
         | I like your optimism, but I think a special government LLM
         | to implement authoritarianism is realistically much more
         | likely.
         | 
         | In the end, someone has to enforce the things an LLM spits
         | out. Who does that? The people in charge. If you read any
         | history, the most likely scenario is the people in charge
         | guiding the LLM to secure more power and wealth.
         | 
         | Now maybe it'll work for a while, depending on how good the
         | safeguards are. Every empire only works for a while. It's a
         | fun experiment.
        
         | zozbot234 wrote:
         | Real-world LLMs cannot even write a proper legal brief
         | without making stuff up, providing fake references, and
         | spouting all sorts of ludicrous nonsense. Expecting them to
         | set policy, or even to provide effective suggestions to that
         | effect, is a fool's errand.
        
           | pixl97 wrote:
           | >Real-world politicians cannot even write a proper legal
           | brief without making stuff up, providing fake references,
           | and spouting all sorts of ludicrous nonsense. Expecting
           | them to set policy, or even to provide effective
           | suggestions to that effect, is a fool's errand.
           | 
           | This has been the more realistic experience of the average
           | American for the past few years.
        
       | mellosouls wrote:
       | This is an interesting and thoughtful article I think, but worth
       | evaluating in the context of the service ("cognitive security")
       | its author is trying to sell.
       | 
       | That's not to undermine the substance of the discussion of
       | political/constitutional risk under the inference-hoarding of
       | authority, but I think it is useful to bear in mind the
       | author's commercial framing (or, more charitably, the
       | motivation for the service, if this philosophical
       | consideration preceded it).
       | 
       | A couple of arguments against the idea of singular control
       | would be that it requires technical experts to produce and
       | manage it, and that it would be distributed internationally,
       | given that any country advanced enough would have its own
       | version; but it would of course pose tricky questions for
       | elected representatives in democratic countries to answer.
        
         | djwide wrote:
         | Admittedly, there's no direct tie to what I'm trying to
         | sell. I just thought it was a worthwhile topic of discussion
         | - it doesn't need to be politically divisive, and I might as
         | well post it on my company site.
         | 
         | I don't think there are easy answers to the questions I am
         | posing, and any engineering solution would fall short.
         | Thanks for reading.
        
       | MengerSponge wrote:
       | A COMPUTER CAN NEVER BE HELD ACCOUNTABLE THEREFORE A COMPUTER
       | MUST NEVER MAKE A MANAGEMENT DECISION.
        
         | toomuchtodo wrote:
         | While I have great respect for this piece of IBM literature, I
         | will also mention that most humans are not held accountable for
         | management decisions, so I suppose this idea was for a more
         | just world that does not exist.
        
           | lenerdenator wrote:
           | I'd say the fix, then, is to create a more just world
           | where leaders are held accountable, rather than to hand
           | things off to something that, by its very nature, cannot
           | be held accountable.
        
           | skirge wrote:
           | A human CAN; a computer CAN NEVER.
        
             | toomuchtodo wrote:
             | My point is that accountability is perhaps irrelevant.
             | You can turn off a computer; you can turn off a human.
             | Is that accountability? Accountability only exists if
             | there are consequences, and those consequences matter.
             | What does it mean for them to "matter"?
             | 
             | If accountability is taking ownership of mistakes and
             | correcting for improved future outcomes, then certainly,
             | I trust the computer more than the human. We are never
             | running out of humans causing harm within suboptimal
             | systems that continue to allow it.
        
         | deelayman wrote:
         | I wonder if that quote still applies to systems that are
         | hardwired to learn from decision outcomes and new
         | information.
        
           | svieira wrote:
           | What (or who) would have been responsible for the Holodomor
           | if it had been caused by an automated system instead of
           | deliberate human action?
        
           | advisedwang wrote:
           | LLMs do not learn as they go in the way people do.
           | People's brains are plastic and immediately adapt to new
           | information, but for LLMs:
           | 
           | 1. Past decisions and outcomes get into the context
           | window, but that doesn't actually update any model
           | weights.
           | 
           | 2. Your interaction may eventually get into the training
           | data for a future LLM, but that is an incredibly diluted
           | form of learning.
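           | 
           | A minimal sketch of the difference (illustrative only; the
           | llm object and its methods are placeholders, not a real
           | API):
           | 
           |   # In-context "memory": past outcomes ride along in the
           |   # prompt, but the model's weights never change.
           |   history = []
           | 
           |   def decide(llm, situation):
           |       prompt = "\n".join(history + [situation])
           |       decision = llm.generate(prompt)  # weights untouched
           |       history.append(f"{situation} -> {decision}")
           |       return decision
           | 
           |   # Point 2 is a separate, offline training run, e.g.:
           |   # llm.fine_tune(build_dataset(history))  # new weights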
        
         | notpushkin wrote:
         | Let's assume we live in a hypothetical sane society, where
         | company owners and/or directors are responsible for actions
         | taken through the entity. If they decide to delegate
         | management to an LLM, wouldn't they be held accountable for
         | whatever decisions it makes?
        
         | nilamo wrote:
         | Management is already never held accountable, so replacing them
         | is a net benefit.
        
         | unyttigfjelltol wrote:
         | Computers are _more_ accountable. You just pull the plug
         | and wipe the system.
         | 
         | Executives, in contrast, require option-strike resets and
         | golden parachutes; no accountability.
         | 
         | Neither will tell you they erred or experience contrition, so
         | at a moral level there may well be some equivalency. :D
        
           | sifar wrote:
           | >> Computers are more accountable. You just pull the plug
           | and wipe the system.
           | 
           | I think you are anthropomorphizing here. How does a
           | computer feel when unplugged? How would a computer take
           | responsibility for its actions?
        
       | blibble wrote:
       | I think we're already there, aren't we?
       | 
       | No human came up with those tariffs on penguin island.
        
       | zozbot234 wrote:
       | The really nice thing about this proposal is that at least now we
       | can all stop anthropomorphizing Larry Ellison, and give Oracle
       | the properly robot-identifying CEO it deserves.
        
         | kmeisthax wrote:
         | But then we'd have to call it LawnmowerGPT
        
         | jeffrallen wrote:
         | I came here for this, am not disappoint. :)
         | 
         | Best meme in hacker space, thanks /u/Cantrill.
        
         | Terr_ wrote:
         | For those who haven't seen the reference:
         | https://www.youtube.com/watch?v=-zRN7XLCRhc&t=38m27s
        
       | alexpotato wrote:
       | You sometimes hear people say "I mean, we can't just give an AI a
       | bunch of money/important decisions and expect it to do ok" but
       | this is already happening and has been for years.
       | 
       | Examples:
       | 
       | - Algorithmic trading: I was once embedded on an options
       | trading desk. The head of the desk mentioned that he didn't
       | really know what the PnL was during trading hours because the
       | swings were so big that only the computer algos knew whether
       | the decisions were correct.
       | 
       | - Autopilot: planes can now land themselves so precisely that
       | the front landing-gear wheels "thud" as they roll over the
       | runway centerline markers.
       | 
       | and this has been true for at least 10 years.
       | 
       | In other words, if the above is possible then we are not far off
       | from some kind of "expert system" that runs a business unit
       | (which may be all robots or a mix of robots and people).
       | 
       | A great example of this is here: https://marshallbrain.com/manna1
       | 
       | EDIT: fixed some typos/left out words
        
         | mjr00 wrote:
         | > A great example of this is here:
         | https://marshallbrain.com/manna1
         | 
         | This is a piece of science fiction and has its own
         | (inaccurate, IMO) view of how minimum-wage McDonald's
         | employees would react to a robot manager. Extrapolating this
         | to real life is naive at best.
        
           | pixl97 wrote:
           | >Extrapolating this to real life is naive at best.
           | 
           | Why? It's as much a view of our past unthinking adherence
           | to technology as it is a view of the future.
           | 
           | "Computer says no" is a saying for a reason.
        
             | nirav72 wrote:
             | >"Computer says no" is a saying for a reason.
             | 
             | Current LLMs rarely say no, unless they're specifically
             | configured to block certain types of requests.
        
         | pavel_lishin wrote:
         | But none of those things are AI in the same sense that we use
         | the term now, to refer to LLMs.
        
           | alexpotato wrote:
           | But those things were considered on the same level as
           | current LLMs, in the sense of "well, a computer might do
           | part of my job but not ALL of it".
           | 
           | No, algorithmic trading didn't replace everything a trader
           | did, but it most certainly replaced large parts of the
           | workload and made it much faster and horizontally
           | scalable.
        
             | exsomet wrote:
             | The two key differences to me are infrastructure and
             | specificity of purpose.
             | 
             | Autoland in a plane requires expensive, complex, and
             | highly fine-tuned equipment to be installed on every
             | runway that supports it (which, as a proportion, is not
             | a majority of the world's runways).
             | 
             | And as to specificity, the system does exactly one thing
             | - land a specific model of plane on a specific runway
             | equipped with instrumentation configured a specific way.
             | 
             | The point being: it isn't a magic wand. Any serious
             | conversation about AI in these types of life-or-death
             | situations has to recognize that, without the
             | corresponding investment in infrastructure and
             | specificity of purpose, things like this blog post are
             | essentially just science fiction. The fact that previous
             | generations considered autoland and algorithmic trading
             | to be magic doesn't really change anything about that.
        
             | happymellon wrote:
             | The problem here is that you are cherry-picking examples
             | of successful technology.
             | 
             | The inverse would be to list off Theranos, Google
             | Stadia, and other failed tech, where people claimed
             | massive steps forward that subsequently didn't
             | materialise. In fact, a lot of the time it was mostly
             | fabricated by people with something to gain from ripping
             | off VCs.
             | 
             | Look at how bad it is with Microsoft and Windows,
             | despite their going "all in on AI".
             | 
             | Ultimately no one really knows how it will pan out, or
             | whether we will end up with an Enron or an Apple. Or
             | whether it will be a successful tech that is ultimately
             | mishandled by corporations and fails, or a limited tech
             | that nonetheless captures the imagination through pop
             | culture and takes over.
        
         | djwide wrote:
         | I'm saying there's something structurally different between
         | autonomous systems generally and an LLM corpus, which has
         | all of the information in one place and is, at least in
         | theory, extractable by one user.
        
         | kekqqq wrote:
         | I must say the book is unrealistic, but it makes a good
         | sci-fi story. Thanks - I just read it in 80 minutes.
        
         | Guvante wrote:
         | You gave examples of feedback loops.
         | 
         | We know very well how to train computers to handle those
         | effectively.
         | 
         | Anything without a quick feedback loop is much more
         | difficult to handle this way.
        
           | stoneforger wrote:
           | LLMs designing PID loops?
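           | 
           | A PID loop is itself a tight, well-understood feedback
           | controller - exactly the kind of loop computers already
           | handle well. A minimal textbook sketch (the gains below
           | are arbitrary placeholders):
           | 
           |   def make_pid(kp, ki, kd):
           |       # output = kp*e + ki*integral(e) + kd*de/dt
           |       integral, prev_error = 0.0, 0.0
           |       def step(setpoint, measured, dt):
           |           nonlocal integral, prev_error
           |           error = setpoint - measured
           |           integral += error * dt
           |           derivative = (error - prev_error) / dt
           |           prev_error = error
           |           return kp*error + ki*integral + kd*derivative
           |       return step
           | 
           |   pid = make_pid(kp=1.0, ki=0.1, kd=0.05)
           |   correction = pid(setpoint=100.0, measured=92.5, dt=0.01)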
        
       | johnohara wrote:
       | > The President sits at the top of the classification hierarchy.
       | 
       | Constitutionally, and in theory as Commander-in-Chief,
       | perhaps. But in practice, it does not seem so. Worse yet, it's
       | been reported that the current President doesn't even bother
       | to read the daily briefing because he doesn't trust it.
        
         | handedness wrote:
         | It's not an issue of theory-versus-practice.
         | 
         | You're conflating the classification system, established by EO
         | and therefore by definition controlled by the Executive, with
         | the classified products of intel agencies.
         | 
         | A particular POTUS's use (or lack thereof) of classified
         | information has no bearing on the nature of the classification
         | system.
        
         | SoftTalker wrote:
         | And the last president couldn't comprehend it.
         | 
         | <shrug>
        
         | djwide wrote:
         | I point that out a little when I refer to agencies being
         | discouraged from sharing information. The CIA may be worried
         | about losing HUMINT data to the NSA, for example. You may
         | also be referring to agencies compartmentalizing information
         | away from the president, which, you're right, happens to
         | some extent now but shouldn't 'in theory'. Maybe it's a
         | don't-ask-don't-tell situation. I think Cheney blew the
         | cover of an intel asset, though.
        
           | handedness wrote:
           | > compartmentalizing the information away from the president
           | as well which you are right happens to some extent now
           | 
           | This is nothing new; it has been happening since at least
           | the 1940s, to multiple administrations from both parties:
           | Roosevelt, Truman, Kennedy, Nixon, Reagan... and that's
           | just some of the instances that were publicly documented.
        
       ___________________________________________________________________
       (page generated 2026-01-27 10:01 UTC)