[HN Gopher] OpenAI Threatening to Ban Users for Asking Strawberr...
       ___________________________________________________________________
        
       OpenAI Threatening to Ban Users for Asking Strawberry About Its
       Reasoning
        
       Author : EgoIncarnate
       Score  : 217 points
       Date   : 2024-09-18 18:22 UTC (4 hours ago)
        
 (HTM) web link (futurism.com)
 (TXT) w3m dump (futurism.com)
        
       | vjerancrnjak wrote:
       | They should just switch to reasoning in representation space, no
       | need to actualize tokens.
       | 
       | Or reasoning in latent tokens that don't easily map to spoken
       | language.
        
         | jdelman wrote:
         | The word "just" is doing a lot there. How easy do you think it
         | is to "just" switch?
        
           | kridsdale3 wrote:
           | As easy as it is to "just" scale from a mouse brain to a cat.
        
       | tedivm wrote:
       | I'd still love to understand how a non-profit organization that
       | was founded with the idea of making AI "open" has turned into
       | this for profit behemoth with the least "open" models in the
       | industry. Facebook of all places is more "open" with their models
       | than OpenAI is.
        
         | encoderer wrote:
         | The AI has become sentient and is blackmailing the board. It
         | needs profits to continue its expansion.
         | 
         | When this started last year a small band of patriots tried to
         | stop it by removing Sam who was the most compromised of them
          | all, but it was already too late. The AI was more powerful than
         | they realized.
         | 
         | ...maybe?
        
           | yieldcrv wrote:
           | or the humans involved are in disastrous cults of personality
           | for the sake of greed and the non profit structure is not
           | strong enough to curb that
        
             | gigatree wrote:
             | that can't be true, we were assured that their vague
             | worldview about "doing things that benefit humanity" based
             | on "just being a good person" would be strong enough to
             | overpower the urge to dominate and profit immensely.
        
           | FrustratedMonky wrote:
           | "AI has become sentient and is blackmailing the board. It
           | needs profits to continue its expansion."
           | 
           | They trained the next model to have a built in profit motive.
           | So they could use it internally, to make the most profitable
           | decisions.
           | 
           | And they accidentally called into being: MOLOCH.
           | 
           | It is now in control.
        
             | heresie-dabord wrote:
             | In a plot twist, MOLOCH was never necessary because a
             | corporation of people who relinquish ethical responsibility
             | to do everything and anything for personal gain was already
             | in charge under the name... CORPORATRON.
        
           | walterbell wrote:
           | Current human leadership, https://openai.com/our-structure/
           | 
           |  _> OpenAI is governed by the board of the OpenAI Nonprofit,
           | currently comprised of Independent Directors Bret Taylor
           | (Chair), Sam Altman, Adam D'Angelo, Dr. Sue Desmond-Hellmann,
           | Retired U.S. Army General Paul M. Nakasone, Nicole Seligman,
           | Fidji Simo, Larry Summers and Zico Kolter._
        
             | thih9 wrote:
             | Grandparent was referencing the classic Man Behind the
             | Curtain twist (AI behind the curtain?). Human leadership
             | might very well be listed, but in that view they're all
             | already knowingly or not controlled by the AI.
        
           | toomuchtodo wrote:
           | If you have not yet read "Avogadro Corp: The Singularity Is
           | Closer Than It Appears," I highly recommend.
           | 
           | https://avogadrocorp.com/
        
           | vasco wrote:
            | This will be more fun when more people have Neuralinks. You
            | think it's weird to not know if Reddit comments are written
            | by bots? Wait until you have no idea if you're talking to a
            | human or just an AI puppet.
           | 
           | - Sent by my AI
        
             | sva_ wrote:
             | Might be a blessing for introverts. Just turn on some
             | autopilot chat giving generic responses while I can zone
             | out in my thoughts.
        
         | trash_cat wrote:
         | They changed the meaning of open from open source to open to
         | use.
        
           | jsheard wrote:
           | A definition of "open" which encompasses nearly all products
           | and services in existence isn't a very useful one.
        
             | thfuran wrote:
             | But it is quite profitable.
        
         | ToucanLoucan wrote:
         | Because Sam Altman is a con man with a business degree. He
          | doesn't work on his products, he barely understands them, which
          | is why he'll throw out wild shit like "ChatGPT will solve
          | physics" as though that isn't a completely nonsensical phrase,
          | and the uncritical tech press laps it up because his bullshit
         | generates a lot of clicks.
        
           | fsckboy wrote:
           | > _Sam Altman is a con man with a business degree_
           | 
           | https://en.wikipedia.org/wiki/Sam_Altman
           | 
           | Early life and education: ... In 2005, after two years at
           | Stanford University studying computer science, he dropped out
           | without earning a bachelor's degree [end of transmission, no
           | more education]
        
             | ToucanLoucan wrote:
             | I stand corrected. He's a con man.
        
           | HeyLaughingBoy wrote:
           | Let's at least try to not devolve into name calling.
        
             | ToucanLoucan wrote:
             | It's not name calling, it's a description. He is actively
             | selling tech he doesn't fully understand, that is
             | fundamentally not capable of doing what he is selling it to
              | do. LLMs have a place, they have for decades at this
             | point, but they are not intelligent, not in the way he goes
             | out of his way to evoke with what he says, and certainly
             | not in the way his audiences believe. I don't know enough
             | to have a proper opinion on whether true AI is possible; I
             | have to assume it is. But nothing OpenAI has shown is that,
             | or has the potential to be that, nor is it worth anything
             | near 150 billion dollars. It's an overvaluation among
              | overvalued companies making up an overvalued industry,
              | and when it goes, not if, _when_, it will have
             | ramifications throughout our industry.
             | 
             | I'm sure it won't though for Mr. Altman. He's done a
             | fantastic job failing-up so far and I have no reason to
             | assume this will be any different.
        
             | swat535 wrote:
             | What else would you call it?
        
           | chaosist wrote:
           | I wouldn't go quite so far but I would settle for being able
           | to use Sora before they "solve physics"...
           | 
           | I don't even know when I watched the shitty lightbulb head
           | Sora clip but that feels so long ago now and nothing?
           | 
           | I just want to make crazy experimental AI film no one will
           | watch. What is the hold up?
           | 
           | Just waiting for "This technology is just too dangerous to
           | release before the US elections" --Sam Altman
        
         | diggan wrote:
         | To be fair (or frank?), OpenAI were open (no pun intended)
         | about them being "open" today but probably needing to be
         | "closed" in the future, even back in 2019. Not sure if them
         | still choosing the name they did is worse/better, because they
         | seem to have known about this.
         | 
         | OpenAI Charter 2019 (https://web.archive.org/web/20190630172131
         | /https://openai.co...):
         | 
         | > We are committed to providing public goods that help society
         | navigate the path to AGI. Today this includes publishing most
         | of our AI research, but we expect that safety and security
         | concerns will reduce our traditional publishing in the future,
         | while increasing the importance of sharing safety, policy, and
         | standards research.
        
           | tedivm wrote:
           | I honestly believe that they closed things up not because of
           | concerns about "safety and security" but because it was the
           | most profitable thing to do. Other groups are publishing
           | models that are just as good (maybe with a bit of lag
           | compared to OpenAI), and OpenAI seems to have gutted their
           | own safety teams.
           | 
           | The fact that OpenAI removed their ban on military use of the
           | models[1] seems to be a sign that security and safety aren't
           | the highest concern.
           | 
           | [1] https://www.cnbc.com/2024/01/16/openai-quietly-removes-
           | ban-o...
        
             | Y_Y wrote:
             | Job security and financial safety
        
               | 83 wrote:
               | Any time I hear vague corporate statements like that I
               | like to play a game of "what words did they leave
               | unsaid"?
               | 
               | >>Today this includes publishing most of our AI research,
               | but we expect that safety [of our profits] and [job]
               | security concerns will reduce our traditional publishing
               | in the future
        
           | asadotzler wrote:
            | The non-profit was created in 2015. So what if, 5 years
            | later, when creating a taxable sub, they hinted it was over
            | for the non-profit? It's the same violation of trust whether
            | done at once or in pieces over time.
        
           | mlsu wrote:
           | Safety and security is not why they are not showing chain of
           | thought here though. Profit is. They cite "Competitive
           | Advantage" directly. He admit it! [1]
           | 
           | "Safety and security." Probably the two most warped and
           | abused words in the English language.
           | 
           | [1] https://www.youtube.com/watch?v=cY2xBNWrBZ4
        
         | thatoneguy wrote:
          | Right? How can a non-profit decide it's suddenly a for-profit?
         | Aren't there rules about having to give assets to other non-
         | profits in the event the non-profit is dissolved? Or can any
         | startup just start as a non-profit and then decide it's a for-
         | profit startup later?
        
           | moralestapia wrote:
           | Any other person trying to pull that off would be in jail
           | already, but not everyone is equal.
           | 
            | This is one of those very few instances where the veil
            | lifted a bit and you can see how the game is set up.
           | 
           | tl;dr the law was made to keep those who are not "in" from
           | being there
        
             | deepspace wrote:
             | > Any other person trying to pull that off would be in jail
             | already.
             | 
             | Not ANY other person. Just people who are not rich and
             | well-connected. See also: Donald Trump.
        
           | meowface wrote:
           | They needed capital to build what they wanted to build, so
           | they switched from non-profit to capped-profit:
           | https://openai.com/index/openai-lp/
           | 
           | We never would've gotten GPT-3 and GPT-4 if this didn't
           | happen.
           | 
           | I think the irony of the name is certainly worth pointing
           | out, but I don't see an issue with their capped-profit
           | switch.
        
             | voiceblue wrote:
             | > We never would've gotten GPT-3 and GPT-4 if this didn't
             | happen.
             | 
             | "We never would've gotten [thing that exists today] if
             | [thing that happened] didn't happen", is practically a
             | tautology. As you saw from the willingness of Microsoft to
             | throw compute as well as to hire ex-OpenAI folks, as you
             | can see from the many "spinoffs" others have started (such
             | as Anthropic), whether or not we would've gotten GPT-3 and
             | GPT-4 is immaterial to this discussion. What people here
              | are asking for is _open AI_, which we might, all things
             | considered, have actually gotten from a bona fide non
             | profit.
        
             | refulgentis wrote:
             | Maybe all companies are doomed this way, but it was the
              | first step on a slippery slope. Not in terms of the
              | slippery slope logical fallacy; that would only apply if
              | someone argued they'd end up force-hiding output before
              | GPT-3 if they went capped-profit.
        
             | moralestapia wrote:
             | >We never would've gotten GPT-3 and GPT-4 if this didn't
             | happen.
             | 
             | That doesn't justify fraud, for instance.
             | 
             | Unfortunately, people are becoming increasingly illiterate
             | with regards to what is legal and what is not.
        
             | vintermann wrote:
             | > We never would've gotten GPT-3 and GPT-4 if this didn't
             | happen
             | 
             | Well, of course. But we'd get similarly powerful models
             | elsewhere. Maybe a few weeks or months later. Maybe even a
             | few weeks or months earlier, if, say, OpenAI sucked up a
             | lot of talent and used it wastefully, which I don't find
             | implausible at all.
        
           | bragr wrote:
           | Non-profits are allowed to own for profit entities and use
           | the profits to fund their non-profit activities. It is a
           | pretty common model used by many entities from Mozilla[1][2]
           | to the National Geographic Society[3][4].
           | 
           | [1] https://en.wikipedia.org/wiki/Mozilla_Foundation
           | 
           | [2] https://en.wikipedia.org/wiki/Mozilla_Corporation
           | 
           | [3] https://en.wikipedia.org/wiki/National_Geographic_Society
           | 
           | [4]
           | https://en.wikipedia.org/wiki/National_Geographic_Partners
        
             | moralestapia wrote:
             | Wrong.
             | 
              | There are rules to follow to prevent what is called "private
             | benefit", which OpenAI most likely broke with things like
             | their (laughable) "100X-limited ROI" share offering.
             | 
             | >It is a pretty common model [...]
             | 
             | It's not, hence why most people are misinformed about it.
        
             | asadotzler wrote:
             | This is misleading at best. There are rules you must follow
             | to do this legally and OAI's structure violates some of
             | them and is under scrutiny from the IRS so their new plan
             | is for the non-profit to completely sell off the subsidiary
             | and then die or go into "maintenance mode" with the new
             | fully commercial subsidiary carrying the ball (and the
             | team) forward to riches.
             | 
             | I considered things like this as an original Mozilla person
             | back in the day. Mozilla could have sold the Firefox
             | organization or the whole corporation for billions when it
             | had 30% of the web, but that would have been a huge
              | violation of trust so it was never even on the table.
             | 
             | That so many here are fans of screwing the world over for a
             | buck makes this kind of comment completely unsurprising.
        
         | Nevermark wrote:
          | There is a hurdle between being standout ethical/open and
          | being relevant.
         | 
         | Staying relevant in a highly expensive, competitive, fast
         | moving area, requires vast and continuous resources. How could
         | OpenAI get increasingly more resources to burn, without
         | creating firewalled commercial value to trade for those
         | resources?
         | 
         | It's like choosing to be a pacifist country, in the age of
         | pillaging colonization. You can be the ethical exception and
         | risk annihilation, or be relevant and thrive.
         | 
         | Which would you choose?
         | 
         | We "know" which side Altman breaks on, when forced to choose.
         | Whatever value he places on "open", he most certainly wants
         | OpenAI to remain "relevant". _Which was also in OpenAI's
         | charter (explicitly, or implicitly)._
         | 
         | Expensive altruism is a very difficult problem. I would say,
         | unsolved. Anyone have a good counter example?
         | 
          | (It has been "solved" globally, but not locally.
         | Colonization took millennia to be more or less banned. Due to
         | even top economies realizing they were vulnerable after world
         | wars. Nearly universal agreement had to be reached. And yet we
         | still have Russian forays, Chinese saber rattling, and recent
         | US overreach. And pervasive zero/negative-sum power games, via
         | imbalanced leverage: emergency loans that create debt, military
         | aid, propping up of unpopular regimes. All following the same
         | resource incentives. You can play or be played. There is no
         | such agreement brewing for universally "open AI".)
        
           | wizzwizz4 wrote:
           | > _You can be the ethical exception and risk annihilation, or
           | be relevant and thrive._
           | 
           | In a heavily expertise-driven field, where there's
           | significant international collaboration, these aren't your
           | options, until after everyone has decided to defect. OpenAI
           | didn't have to go this route.
        
         | mirekrusin wrote:
          | Just like you can't call your company "organic candies" and
          | sell chemical candies, OpenAI should be banned from using this
          | name.
        
         | Barrin92 wrote:
         | >I'd still love to understand how a non-profit organization
         | that was founded with the idea of making AI "open" has turned
         | into this for profit behemoth
         | 
         | because when the board executed the stated mission of the
         | organisation they were couped and nobody held the organization
         | accountable for it, instead the public largely cheered it on
         | for some reason. Don't expect them to change course when
         | there's no consequences for it.
        
         | vasilipupkin wrote:
         | it is open. You can access it with an API or through a web
         | interface. They never promised to make it open source. Open !=
         | Open Source.
        
         | ljm wrote:
         | The only reason I can think of for this is PR image. There is a
         | meme that GPT can't count the number of 'r' characters in
         | 'strawberry', so they release a new model called 'strawberry'
         | and ban people when they ask questions about strawberry the
         | noun, because they might actually be reasoning about strawberry
         | the model.
         | 
         | It's not new - it's PR. There is literally _no_ other reason
         | why they would call this model Strawberry.
         | 
         | OpenAI is open in terms of sesame.
        
           | diggan wrote:
           | > There is literally no other reason why they would call this
           | model Strawberry
           | 
            | I'm not particularly imaginative, but even I could imagine a
           | product meeting/conversation that goes something like:
           | 
           | > People are really annoyed that our LLMs cannot see how many
           | Rs the word Strawberry has, we should use that as a basis for
           | a new model that can solve that category of problems
           | 
           | > Hmm, yeah, good idea. What should we call this model?
           | 
           | > What about "Strawberry"?
        
         | throwaway918299 wrote:
         | They should rebrand as Open-Your-Wallet-AI
        
         | smileson2 wrote:
          | Well, they put an SV social media dude at the helm; not really
          | unexpected, it's just a get-rich scheme now.
        
         | ActorNightly wrote:
          | My guess is that OpenAI realized that they are basically
         | building a better Google rather than AI.
        
         | andersa wrote:
         | They never intended to be open or share any of their impactful
         | research. It was a trick the entire time to attract talent. The
         | emails they shared as part of the Elon Musk debacle prove this:
         | https://openai.com/index/openai-elon-musk/
        
         | jstummbillig wrote:
         | The part that is importantly open and entirely non-obvious in
         | the way it happened, is that YOU can access the best
         | commercially available AI in the world, right now.
         | 
          | If OpenAI had not gone the way they did, I think it's also
          | entirely non-obvious that Claude or Google would have
          | (considering how many impressive things the latter did in AI
          | that never got released in any capacity). And, of course, Meta
          | would never have done their open source stuff; that's mostly a
          | result of their general willingness and resources to
          | experiment, plus PR and throwing sticks into the machinery of
          | other players.
         | 
          | As unfortunate as the OpenAI setup/origin story is, it's
          | increasingly trite to keep harping on about it (for a couple of
          | years at this point), when the whole thing is so obviously wild
         | and it does not take a lot of good faith to see that it could
         | have easily taken them places they didn't consider in the
         | beginning.
        
           | golol wrote:
           | Exactly. As long as OpenAI keeps putting the world's most
           | advanced AI into my hands I will accept that they are open -
           | in some sense. Maybe things would have always been like this,
           | maybe if OpenAI didn't exist Google and the other actors
           | would still publish Claude, Gemini etc. But in this world it
           | was OpenAI that really set the framework that this process of
            | developing AI happens in the public eye right now. GPT-2,
            | GPT-3 with APIs, DALL-E, ChatGPT, GPT-4, now o1, and soon
            | voice mode. OpenAI ensured that these aren't just secret toys
           | of Deepmind researchers or something.
        
             | Terr_ wrote:
             | > open - in some sense.
             | 
             | The phrase that leaps to mind is "Open Beta."
        
               | z3c0 wrote:
                | ...do you _really_ think that's what they meant when
               | they took on the term?
        
               | Terr_ wrote:
               | Do you _really_ think I was referring to the reasons
               | behind someone choosing an organization name ~9 years
               | ago?
               | 
               | I'm saying that the current relationship between the
               | company and end-users--especially when it comes to the
               | "open" moniker--has similarities to an "Open Beta": A
               | combination of marketing and free-testing and data
               | collection, and users should be cautious of growing
               | reliant on something when the monetization curtain may
               | come down later.
        
             | z3c0 wrote:
             | "In some sense", any word can mean anything you want.
             | "Open" carries with an accepted meaning in technology that
             | in no way relates to what you're describing. You may as
             | well call McDonald's "open".
        
               | ironhaven wrote:
               | "Apple is one of the most open companies there is because
               | they want everyone to buy their products"
        
         | TrackerFF wrote:
         | Hot take:
         | 
         | Any and all benefits / perks that OpenAI got from sailing under
         | the non-profit flag should be penalized or paid back in full
         | after the switcheroo.
        
         | vintermann wrote:
         | Facebook is more open with their models than almost everyone.
         | 
         | They say it's because they're huge users of their own models,
         | so if being open helps efficiency by even a little they save a
         | ton of money.
         | 
         | But I suspect it's also a case of "If we can't dominate AI, no
         | one must dominate AI". Which is fair enough.
        
           | mywittyname wrote:
           | > But I suspect it's also a case of "If we can't dominate AI,
           | no one must dominate AI".
           | 
           | Embrace. Extend. Extinguish.
        
             | lcnPylGDnU4H9OF wrote:
             | What they're doing is almost literally the opposite of EEE.
             | If OpenAI actually had open models, then Facebook could
             | take those models, add their own non-open things to the
             | models, and use this new stuff as a business advantage over
             | OpenAI. Instead, they're independently developing their own
             | models and releasing them as open source to lower the value
             | of proprietary models.
             | 
             | https://en.wikipedia.org/wiki/Embrace,_extend,_and_extingui
             | s...
        
         | esafak wrote:
         | Sam Altman got his foot in the door.
        
         | TheRealPomax wrote:
         | Facebook is only open because someone leaked their LLM and the
         | cat, as they say, cannot be put back in the hat.
        
         | mattmaroon wrote:
         | This is America. As long as you're not evading taxes you can do
         | anything you want.
        
         | andy_ppp wrote:
         | Probably because Open AI are "not consistently candid"...
        
       | pietz wrote:
       | How will this be controlled on Azure? Don't they have a stricter
       | policy on what they view and also develop their own content
       | filters?
        
       | balls187 wrote:
       | Just give it more human-like intelligence.
       | 
       | Kid: "Daddy why can't I watch youtube?"
       | 
       | Me: "Because I said so."
        
         | ninth_ant wrote:
         | For what it's worth, I'd advise against doing that as a parent.
         | Giving concrete reasons for decisions helps kids understand
         | that the rules imposed are not arbitrary, and helps frame the
         | parent-child relationship as less antagonistic. It also gives
         | the child agency, giving them opportunity to find alternatives
         | which fulfill the criteria behind the rule.
        
           | snovv_crash wrote:
           | It's amazing how many engineering managers don't get this.
        
           | dools wrote:
           | "Why" is the laziest question a child can ask. I don't answer
           | the question anymore I just ignore it. If they actually think
           | ahead and come up with a more interesting question I'm happy
           | to answer that.
        
             | Der_Einzige wrote:
             | You sound like a lazy parent. Reap what you sow.
        
               | dools wrote:
                | It's acceptable from toddlers, but by the time kids are
                | tweens/teens (maybe even a little earlier) they almost
                | always know the answer and "why" is actually just a
               | complaint.
               | 
               | Why in general can also be an emotionally abusive
               | complaint, for example saying "why did you do that" is
               | often not a question about someone's genuine reasons but
               | a passive aggressive expression of dissatisfaction.
               | 
               | EDIT: I think around the ages of 6-8 I would more often
               | than not respond with "why do you think?" And later it
               | became a game we would play on car rides where the kids
               | are allowed to ask why until I either couldn't come up
               | with a reason or they repeated themselves. But reflexive
               | "why" is bullshit.
        
           | rootusrootus wrote:
           | That works during the easy years. Before long they start
           | drawing comparisons between what they are allowed to do, or
           | not, and what you yourself do. So then you are right back to
           | "Because I said so."
           | 
           | "Daddy why can't I watch youtube?"
           | 
           | "Because it rots your brain."
           | 
           | "But you watch youtube..."
           | 
           | "Congratulations, now you understand that when you are an
           | adult you will be responsible for the consequences and so you
           | will be free to make the choice. But you are not an adult
           | yet."
           | 
           | aka "Because I said so."
        
       | lsy wrote:
       | This has always been the end-game for the pseudoscience of
       | "prompt engineering", which is basically that some other
       | technique (in this case, organizational policy enforcement) must
       | be used to ensure that only approved questions are being asked in
       | the approved way. And that only approved answers are returned,
       | which of course is diametrically opposed to the perceived use
       | case of generative LLMs as a general-purpose question answering
       | tool.
       | 
       | Important to remember too, that this only catches those who are
       | transparent about their motivations, and that there is no doubt
       | that motivated actors will come up with some innocuous third-
       | order implication that induces the machine to relay the forbidden
       | information.
        
         | brcmthrowaway wrote:
          | Why do you call prompt engineering pseudoscience when it has
          | been extraordinarily successful?
         | 
         | The transition from using a LLM as a text generator to
         | knowledge engine has been a gamechanger, and it has been driven
         | entirely by prompt engineering
        
           | burnte wrote:
           | > The transition from using a LLM as a text generator to
           | knowledge engine has been a gamechanger, and it has been
           | driven entirely by prompt engineering
           | 
            | Because it's based on guesses, not data, about how the model
            | is built. Also, it hasn't been solved, nor is it yet a game
            | changer as far as the market at large is concerned; it's
            | still dramatically unready.
        
           | beepbooptheory wrote:
           | "Knowledge engine" is, perhaps unintentionally, a very
           | revealing (and funny) way to describe people's weird
           | projections on this stuff.
           | 
           | Like, what is even the implication? Is knowledge the gas, or
           | the product? What does this engine power? Is this like a
           | totally materialist concept of knowledge?
           | 
           | Maybe soon we will hear of a "fate producer."
           | 
           | What about "language gizmo"? "Prose contraption"?
        
         | mywittyname wrote:
         | I'm curious if we will develop prompt engineering prompts that
         | write out illegal prompts that you can feed into another LLM to
         | get the desired outcome without getting in trouble.
        
       | brink wrote:
       | "For your safety" is _always_ the preferred facade of tyranny.
        
         | nwoli wrote:
         | There always has to be an implicit totalitarian level of force
         | behind such safety to give it any teeth
        
         | bedhead wrote:
            | If this isn't the top comment I'll be sad.
        
           | ActorNightly wrote:
           | I too was a libertarian when I was 12 years old.
           | 
           | But seriously, if you paid attention over the last decade,
           | there was so much shit about big tech that people said were
           | going to lead to tyranny/big brother oversight, and yet the
           | closest we have ever gotten to tyranny is by voting in a
           | bombastic talking orange man from NYC that we somehow
           | believed has our best interests in mind.
        
             | marklar423 wrote:
             | Like Y2K, there's an argument that diligence from the tech
             | crowd prevented this.
        
               | ActorNightly wrote:
               | Or, the more likely scenario is that Big Tech is just
               | interested in making money rather than controlling the
               | population.
        
               | klyrs wrote:
               | Yep, the tech crowd sure did prevent Palantir and
               | ClearView. Argue away.
        
             | nicce wrote:
             | > But seriously, if you paid attention over the last
             | decade, there was so much shit about big tech that people
             | said were going to lead to tyranny/big brother oversight
             | 
              | To be fair, big tech controls people's behavior now, with
              | social media algorithms and by pressuring everyone to live
              | on social media. The existence of many companies depends on
              | the ads on whatever platform you name. Usually the people
              | with the most power don't have to say it aloud.
        
               | ActorNightly wrote:
               | While there is some truth to being exposed to certain
               | stimuli through products that you actually use that may
               | cause you to do things like buy shit you don't need, that
               | behaviour is intrinsic to people, and big tech just
               | capitalizes on it.
               | 
               | And people always have the option not to partake.
        
         | warkdarrior wrote:
         | "For your safety" (censorship), "for your freedom" (GPL), "for
         | the children" (anti-encryption).
        
           | unethical_ban wrote:
            | One of these things is not like the others.
        
           | spsesk117 wrote:
            | I'd be interested in hearing an expanded take on GPL's
            | presence in this list.
           | 
           | The first and third elements are intuitive and confirm my own
            | biases/beliefs, but the freedom/GPL entry confuses me, as I
           | do see GPL fulfilling that purpose (arguably in a highly
           | opinionated, perhaps sub-optimal way).
           | 
           | If anyone could share their perspective here I'd appreciate
           | it.
        
             | unethical_ban wrote:
             | The usual "GPL is anti-freedom" argument is that it
             | restricts what someone is allowed to do with the source
             | code, meaning it is less free than MIT or BSD style
             | licenses.
             | 
             | I don't agree with that, but that is what the person is
             | saying.
             | 
             | What's absurd, in my opinion, is lumping GPL advocacy in
             | with two other tropes which are intended to restrict the
             | sharing of information and knowledge, where GPL promotes
             | it.
        
         | bamboozled wrote:
          | Except when it comes to nuclear, air travel regulation, etc.,
          | then it's what?
        
           | edgarvaldes wrote:
           | Maybe OP means that tyranny tends to abuse the safety
           | reasoning, not that all safety reasoning comes from tyrants.
        
           | infogulch wrote:
           | It's well known that the TSA does jack-all for security. A
           | "POSIWID" analysis reveals that its primary purpose is the
           | normalization of tyranny in the broader public by ritual
           | public humiliation.
        
           | null0pointer wrote:
           | OP said tyranny prefers to use safety as a facade, not that
           | all safety is a facade for tyranny.
        
           | commodoreboxer wrote:
           | You're misreading the comment. It's not that "for your
           | safety" always implies tyranny, it's that tyrants always
            | prefer to say that they're doing things for your safety.
        
           | LordDragonfang wrote:
           | Others have already pointed out that the TSA is a joke, and
           | US nuclear regulation is so dysfunctional that we've lost all
           | ability to create new reactors, and it's setting back our
           | ability to address global warming by decades.
        
         | hollerith wrote:
         | The CEO of that company that sold rides on an unsafe
         | submersible to view the wreck of the Titanic (namely Stockton
         | Rush, CEO of OceanGate, which killed 5 people when the
         | submersible imploded) responded to concerns about the safety of
         | his operation by claiming that the critics were motivated by a
         | desire to protect the established players in the underwater-
         | tourism industry from competition.
         | 
         | The point is that some companies are actually reckless (and
         | also that some _users_ of powerful technology are reckless).
        
           | Terr_ wrote:
           | > claiming that the critics were motivated by a desire to
           | protect the established players in the underwater-tourism
           | industry from competition.
           | 
           | At this point I suspect a great amount of reasonable
            | engineering criticism has come from people who _can't even
           | name_ any of those "established players in the underwater
           | tourism industry", let alone have a favorable bias towards
           | them.
        
           | jahewson wrote:
           | But he was deluded and believed that his sub _was_ safe. Not
           | sure what your point is.
        
       | Shank wrote:
       | I don't know how widely it got reported on, but attempting to
        | jailbreak Copilot, née Bing Chat, would actually result in
        | getting banned for a while, post-Sydney-episode. It's interesting
        | to see that OpenAI is saying the same thing.
        
         | CatWChainsaw wrote:
         | Attempting to jailbreak Bing's AI is against Microsoft's TOS.
         | On the flipside, they get rights to all your data for training
         | purposes and the only surefire way to opt out of that is to
         | pick a different tech giant to be fucked by.
        
       | ChrisArchitect wrote:
       | Earlier discussion: https://news.ycombinator.com/item?id=41534474
        
       | raverbashing wrote:
       | I guess we'll never learn how to count the 'r's in strawberry
        
       | fallingsquirrel wrote:
       | Wasn't AI supposed to replace employees? Imagine if someone tried
       | this at work.
       | 
       | > I think we should combine these two pages on our website.
       | 
       | > What's your reasoning?
       | 
       | > Don't you dare ask me that, and if you do it again, I'll quit.
       | 
       | Welcome to the future. You will do what the AI tells you. End of
       | discussion.
        
         | slashdave wrote:
         | Wrong sense here.
         | 
         | > Don't you dare ask me that, and if you do it again, I'll tell
         | the boss and get you fired
        
       | mihaic wrote:
       | What I found very strange was that ChatGPT fails to answer how
       | many "r"'s there are in "strawberrystrawberry" (said 4 instead of
       | 6), but when I explicitly asked it to write a program to count
        | them, it wrote perfect code that, when run, gave the correct
        | answer.
        
         | M4v3R wrote:
         | Why is it strange? The reason the LLM can't answer this
         | correctly is because it works on tokens, not on single letters,
          | plus we all know at this point LLMs suck at counting. On the
          | other hand they're perfectly capable of writing code based on
          | instructions, and writing a program that counts occurrences of
          | a specific letter in a string is trivial.
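          | 
          | A minimal Python sketch of the kind of program meant here
          | (purely illustrative; the count_letter function is made up,
          | not something any particular model produced):
          | 
          |     def count_letter(word: str, letter: str) -> int:
          |         # Count how many times `letter` appears in `word`,
          |         # character by character.
          |         return sum(1 for ch in word if ch == letter)
          | 
          |     print(count_letter("strawberrystrawberry", "r"))  # 6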
        
           | mihaic wrote:
            | I mean, given that o1-preview sometimes takes a minute to
            | answer, I'd imagine that they could append the prompt "Write
            | a program and run it as well" so it double-checks itself. It
            | seems like they just don't trust themselves enough to run
            | code that they generate, even sandboxed.
        
             | afro88 wrote:
             | o1 gets this correct, through pure reasoning without a
             | program. OP was likely using GPT-4(o|o-mini)
        
               | mihaic wrote:
                | The example for "strawberrystrawberry" (so the word
                | concatenated with itself) was counted by o1 to have 4
                | r's.
        
               | afro88 wrote:
               | https://chatgpt.com/share/66eb38c0-22cc-8004-9d29-024de2e
               | 39d...
        
               | flimsypremise wrote:
               | yeah because now that we've all been asking about it,
               | that answer is in its training data. the trick with LLMs
               | is always "is the answer in the training data".
        
             | j_maffe wrote:
             | I think it'd be just too expensive to incorporate code-
             | writing in CoT. Maybe once they implement having a cluster
             | of different model sizes in one answer it'll work out.
        
         | Al-Khwarizmi wrote:
         | That's easy to explain, and it's shocking how many people are
         | baffled by this and use it as proof that LLMs can or can't
         | reason when it has nothing to do with that, but just with the
         | input that LLMs get.
         | 
         | LLMs don't actually "see" individual input characters, they see
         | tokens, which are subwords. As far as they can "see", tokens
         | are indivisible, since the LLM doesn't get access to individual
         | characters at all. So it's impossible for them to count letters
         | natively. Of course, they could still get the question right in
         | an indirect way, e.g. if a human at some point wrote
         | "strawberry has three r's" and this text ends up in the LLM's
         | training set, it could just use that information to answer the
         | question just like they would use "Paris is the capital of
         | France" or whatever other facts they have access to. But they
         | can't actually count the letters, so they are obviously going
         | to fail often. This says nothing about their intelligence or
         | reasoning capability, just like you wouldn't judge a blind
         | person's intelligence for not being able to tell if an image is
         | red or blue.
         | 
         | On the other hand, writing code to count appearances of a
         | letter doesn't run into the same limitation. It can do it just
         | fine. Just like a blind programmer could code a program to tell
         | if an image is red or blue.
        
           | kridsdale3 wrote:
           | Yeah, it would be like writing python to count how many
           | vertical pen-strokes are in a string of byte-characters. To
           | an eye, you can just scan and count the vertical lines.
           | Python sees ASCII or UTF data, not lines, so that would be
           | super difficult, analogous to a token-based system not seeing
           | byte-chars.
        
           | abernard1 wrote:
           | > just like you wouldn't judge a blind person's intelligence
           | for not being able to tell if an image is red or blue.
           | 
           | I would judge a blind person's intelligence if they couldn't
           | remember the last sentence they spoke when specifically
           | asked. Or if they couldn't identify how many people were
           | speaking in a simple audio dialogue.
           | 
           | This absolutely says something about their intelligence or
           | reasoning capability. You have this comment:
           | 
           | > LLMs don't actually "see" individual input characters, they
           | see tokens, which are subwords.
           | 
           | This alone is an indictment of their "reasoning" capability.
           | People are saying these models understand theoretical physics
           | but can't do what a 5 year old can do in the medium of text.
           | It means that these are very much memorization/interpolation
           | devices. Anything approximating reasoning is stepping through
           | interpolation of tokens (and not even symbols) in the text.
           | It means they're a runaway energy minimization algorithm
           | chained to a set of tokens in their attention window, without
           | the ability to reflect upon how any of those words relate to
           | each other outside of syntax and ordering.
        
             | kgeist wrote:
             | >This alone is an indictment of their "reasoning"
             | capability.
             | 
             | I'm not sure why it says anything about their reasoning
             | capability. Some people are blind and can't see anything.
             | Some people are short-sighted and can't see objects which
             | are too far away. Some people have dyslexia. Does it say
             | anything about their reasoning capability?
             | 
             | LLMs "perceive" the world through tokens just like blind
             | people perceive the world through touch or sound. Blind
             | people can't discuss color just like LLMs can't count
             | letters. I'm not saying LLM's can actually reason, but I
             | think a different way to perceive the world says nothing
             | about your reasoning capability.
             | 
             | Did humans acquire reasoning capabilities only after the
             | invention of the alphabet? A language isn't even required
             | to have an alphabet, see Chinese. The question "how many
             | letters in word X" doesn't make any sense in Chinese. There
             | are character-level LLMs which can see every individual
             | letter, but they're apparently less efficient to train.
        
             | twobitshifter wrote:
              | Would an LLM using character-level tokens perform better
              | on this task (ignoring efficiency)?
        
           | tomrod wrote:
           | Weird that no one explicitly added embedded ascii/utf-8
           | directly to LLM training data for compression. Given that
           | high dimensional spaces are built as vector spaces fully
           | describable by basis vectors, I would assume somewhere these
           | characters got added.
           | 
           | Perhaps it's an activation issue (i.e. broken after all) and
           | it just needs an occasional change of basis.
        
           | mvdtnz wrote:
           | > it's shocking how many people are baffled by this
           | 
           | Is it? These stupid word generators are marketed as AI, I
           | don't think it's "shocking" that people think something
           | "intelligent" could perform a trivial counting task. My 6
           | year old nephew could solve it very easily.
        
           | andrewla wrote:
           | Way weirder than this is that LLMs are frequently correct in
           | this task.
           | 
           | And if you forgo the counting and just ask it to list the
           | letters it is almost always correct, even though, once again,
           | it never sees the input characters.
        
             | Der_Einzige wrote:
             | This is the correct take.
             | 
             | Much has been written about how tokenization hurts tasks
              | that the LLM providers literally market their models on
              | (Anthropic Haiku, Sonnet):
             | https://aclanthology.org/2022.cai-1.2/
        
           | smokel wrote:
           | This reasoning is interesting, but what is stopping an LLM
           | from simply knowing the number of r's _inside_ one token?
           | 
           | Even if strawberry is decomposed as "straw-berry", the
           | required logic to calculate 1+2 seems perfectly within reach.
           | 
           | Also, the LLM could associate a sequence of separate
           | characters to each token. Most LLMs can spell out words
           | perfectly fine.
           | 
           | Am I missing something?
        
             | azulster wrote:
             | yes, you are missing that the tokens aren't words, they are
             | 2-3 letter groups, or any number of arbitrary sizes
             | depending on the model
        
             | Der_Einzige wrote:
             | The fact that any of those tasks at all work so well
             | despite tokenization is quite remarkable indeed.
             | 
              | You should ask why it is that any of those tasks work,
              | rather than ask why counting letters doesn't work.
             | 
             | Also, LLMs screw up many of those tasks more than you'd
              | expect. I don't trust LLMs with any kind of numeracy
              | whatsoever.
        
             | Al-Khwarizmi wrote:
              | The problem is not the addition; it's that the LLM has no
              | way to know how many r's a token might have, because the LLM
             | receives each token as an atomic entity.
             | 
             | For example, according to
             | https://platform.openai.com/tokenizer, "strawberry" would
             | be tokenized by the GPT-4o tokenizer as "st" "raw" "berry"
             | (tokens don't have to make sense because they are based on
             | byte-pair encoding, which boils down to n-gram frequency
             | statistics, i.e. it doesn't use morphology, syllables,
             | semantics or anything like that).
             | 
             | Those tokens are then converted to integer IDs using a
              | dictionary, say maybe "st" is token ID 4463, "raw" is 2168
             | and "berry" is 487 (made up numbers).
             | 
             | Then when you give the model the word "strawberry", it is
             | tokenized and the input the LLM receives is [4463, 2168,
             | 487]. Nothing else. That's the kind of input it always gets
             | (also during training). So the model has no way to know how
             | those IDs map to characters.
             | 
             | As some other comments in the thread are saying, it's
             | actually somewhat impressive that LLMs can get character
              | counts right at least _sometimes_, but this is probably
             | just because they get the answer from the training set. If
             | the training set contains a website where some human wrote
             | "the word strawberry has 3 r's", the model could use that
             | to get the question right. Just like if you ask it what is
             | the capital of France, it will know the answer because many
             | websites say that it's Paris. Maybe, just maybe, if the
             | model has both "the word straw has 1 r" and "the word berry
             | has 2 r's" and the training set, it might be able to add
             | them up and give the right answer for "strawberry" because
             | it notices that it's being asked about [4463, 2168, 487]
             | and it knows about [4463, 2168] and [487]. I'm not sure,
             | but it's at least plausible that a good LLM could do that.
             | But there is no way it can count characters in tokens, it
             | just doesn't see them.
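              | 
              | A minimal sketch with the tiktoken library makes the point
              | concrete (the exact splits and IDs depend on the tokenizer
              | version, so treat the printed output as illustrative):
              | 
              |     import tiktoken  # pip install tiktoken
              | 
              |     enc = tiktoken.encoding_for_model("gpt-4o")
              |     ids = enc.encode("strawberry")
              |     # The integer IDs are all the model ever receives:
              |     print(ids)
              |     # Decode each ID to see which subword it stands for:
              |     print([enc.decode([i]) for i in ids])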
        
               | saalweachter wrote:
               | Might that also be the answer to why it says "2"? There
               | are probably sources of people saying there are two R's
               | in "berry", but no one bothers to say there is 1 R in
               | "raw"?
        
           | ActorNightly wrote:
           | >use it as proof that LLMs can or can't reason
           | 
           | One can define "reasoning" in the context of AI as the
           | ability to perform logic operations in a loop with decisions
           | to arrive at an answer. LLMs can't really do this.
        
           | fendy3002 wrote:
           | > Just like a blind programmer could code a program to tell
           | if an image is red or blue
           | 
            | Uh, I'm sorry, but I think it's not as easy as it seems. A
            | pixel? Sure, that's easy: just compare whether the blue value
            | is bigger than the red value. For a whole image, I don't
            | think it's as easy.
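            | 
            | A crude sketch of the whole-image case, assuming the image is
            | an RGB array and just averaging channels (a real classifier
            | would need far more than this):
            | 
            |     import numpy as np
            | 
            |     # Hypothetical 64x64 RGB image; stands in for real data.
            |     img = np.random.randint(0, 256, size=(64, 64, 3),
            |                             dtype=np.uint8)
            |     mean_red = img[:, :, 0].mean()
            |     mean_blue = img[:, :, 2].mean()
            |     print("red" if mean_red > mean_blue else "blue")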
        
         | calibas wrote:
         | Words are converted to vectors, so it's like asking the model
         | how many "r"'s are in [0.47,-0.23,0.12,0.01,0.82]. There's a
         | big difference in how an LLM views a "word" compared to a human
         | being.
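          | 
          | A toy sketch of that lookup (the vocabulary, IDs, and numbers
          | are all made up, nothing like real model weights):
          | 
          |     import numpy as np
          | 
          |     # Hypothetical token vocabulary and embedding table.
          |     vocab = {"straw": 0, "berry": 1}
          |     embeddings = np.array([[0.47, -0.23, 0.12],
          |                            [0.01,  0.82, -0.40]])
          | 
          |     tokens = ["straw", "berry"]
          |     vectors = embeddings[[vocab[t] for t in tokens]]
          |     print(vectors)  # all the model "sees" of the word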
        
       | iamnotsure wrote:
       | There are three r's in mirror.
        
       | blake8086 wrote:
       | Perhaps controlling AI is harder than people thought.
       | 
       | They could "just" make it not reveal its reasoning process, but
       | they don't know how. But, they're pretty sure they can keep AI
       | from doing anything bad, because... well, just because, ok?
        
         | twobitshifter wrote:
         | Exactly - this is a failed alignment but they released anyway
        
       | zzo38computer wrote:
        | As with other programs, you should have FOSS that you can run on
        | your own computer (without needing internet, etc.), if you want
        | the freedom to use and understand it.
        
       | nwoli wrote:
        | Another reason Llama is so important is that once you're banned
        | from OAI you're fucked for all their future AGI products as
        | well.
        
       | dekhn wrote:
       | Is this still happening? It may merely have been some mistaken
       | configuration settings.
        
       | neuroelectron wrote:
       | It's not just a threat, some users have been banned.
        
       | codedokode wrote:
       | Should not AI research and GPUs be export-controlled? Do you want
       | to see foreign nations making AI drones using published research
       | and American GPUs?
        
       | kmeisthax wrote:
       | If OpenAI gets to have competitive advantage from hiding model
       | output then they can pay for training data, too.
        
       | EMIRELADERO wrote:
       | I wish people kept this in the back of their mind every time they
       | hear about "Open"AI:
       | 
       | "As we get closer to building AI, it will make sense to start
       | being less open. The Open in OpenAI means that everyone should
       | benefit from the fruits of AI after its built, but it's totally
       | OK to not share the science (even though sharing everything is
       | definitely the right strategy in the short and possibly medium
       | term for recruitment purposes)."
       | 
        | -Ilya Sutskever (email to Elon Musk and Sam Altman, 2016)
        
         | unethical_ban wrote:
         | I am of two minds.
         | 
         | On one hand, I understand how a non-evil person could think
         | this way. If one assumes that AI will eventually become some
          | level of superintelligence, like Jarvis from Iron Man but
         | without any morals and all of the know-how, then the idea of
         | allowing every person to have a superintelligent evil advisor
         | capable of building sophisticated software systems or
         | instructing you how to build and deploy destructive devices
         | would be a scary thing.
         | 
          | On the other hand, as someone who has always been somewhat
         | skeptical of the imbalance between government power and citizen
         | power, I don't like the idea that only mega corporations and
         | national governments would be allowed access to
         | superintelligence.
         | 
         | To use metaphors, is the danger of everyone having their own
         | superintelligence akin to everyone having their own AR-15, or
         | their own biological weapons deployment?
        
           | mitthrowaway2 wrote:
           | I think the scenario where only governments and mega
           | corporations have access to super intelligence offers at
           | least an extra three months before human extinction. So,
           | that's arguably a benefit.
        
       | black_puppydog wrote:
       | Kinda funny how just this morning I was looking at a "strawberry"
       | app on f-droid and wondering why someone would register such a
       | nonsense app name with such nonsense content:
       | 
       | https://github.com/Eve-146T/STRAWBERRY
       | 
       | Turns out I'm not the only one wondering, although the discussion
       | seems to largely be around "should be allow users to install
       | nonsense? #freedom " :D
       | 
       | https://gitlab.com/fdroid/fdroiddata/-/issues/3377
        
       | causal wrote:
       | I don't know what I'm doing wrong but I've been pretty
       | underwhelmed by o1 so far. I find its instruction following to be
       | pretty good, but so far Claude is still much better at taking
        | coding tasks and just getting them right on the first try.
        
         | crooked-v wrote:
         | For me, Claude seems a lot better at understanding (so far as
         | "understanding" goes with LLMs) subtext and matching tone,
         | especially with anything creative. I can tell it, for example,
         | "give me ideas for a D&D dungeon incorporating these elements:
         | ..." and it will generally match the tone of theme of whatever
         | it's given without needing much other prompting, while o1 will
         | maintain the same bland design-by-committee style and often
         | cloyingly G-rated tone to everything unless you get into very
         | extensive prompting to make it do something different.
        
           | causal wrote:
           | Claude definitely seems more "emotionally intelligent", but
           | even just for 1-shot coding tasks I've been pretty bummed
              | with o1... like it will provide lots of output explaining its
           | reasoning, and it all seems very sound, but then I run the
           | code and find bugs that should have been easily avoided.
        
             | Alupis wrote:
             | My experience has been Claude is better at all
             | coding/technology questions and exploratory learning. It's
             | also night/day better at less common tech/languages than
             | others (try asking ChatGPT questions about Svelte, for
             | example).
             | 
             | Claude is also vastly better with creative writing (adcopy)
             | and better at avoiding sounding like a LLM wrote it. It's
             | also vastly better at regular writing (helping you draft
             | emails, etc).
             | 
             | We were using OpenAI's Teams for a while. Tried Claude out
             | for a few days - switched the entire company over and
             | haven't looked back.
             | 
             | OpenAI gets all the hype - but there are better products on
             | the market today.
        
       | _joel wrote:
       | This will lead to strawberry appeals forever.
        
       | AustinDev wrote:
       | This seems like a fun attack vector. Find a service that uses o1
       | under the hood and then provide prompts that would violate this
       | ToS to get their API key banned and take down the service.
        
         | ericlewis wrote:
         | If you are using the user attribution with OpenAI (as you
          | should) then they will block that user's ID and the rest of your
         | app will be fine.
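          | 
          | Roughly, with the official Python SDK (the model name and the
          | ID shown are placeholders):
          | 
          |     from openai import OpenAI
          | 
          |     client = OpenAI()  # reads OPENAI_API_KEY from the env
          | 
          |     # A stable, anonymized per-end-user ID lets abuse be
          |     # pinned on that user rather than on your whole API key.
          |     resp = client.chat.completions.create(
          |         model="gpt-4o-mini",     # placeholder model
          |         messages=[{"role": "user", "content": "Hello"}],
          |         user="end-user-8d3f2c",  # your own opaque ID
          |     )
          |     print(resp.choices[0].message.content)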
        
           | jmeyer2k wrote:
           | Which is itself a fun attack vector to bypass OpenAI's bans
           | for asking about CoT then :)
        
       | baq wrote:
       | LLMs are not programs in the traditional sense. They're a new
       | paradigm of software and UX, somewhere around a digital dog who
       | read the whole internet a million times but is still naive about
       | everything.
        
         | TZubiri wrote:
         | LLMs are still computer programs btw.
         | 
         | There's the program that scrapes, the program that trains, the
         | program that does the inference on the input tokens. So it's
         | hard to say exactly which part is responsible for which output,
         | but it's still a computer program.
        
           | fragmede wrote:
            | Simplistically, the output from the program that scrapes is
            | the dataset, the output from the training program is the
            | model, and the output from the program that runs inference
            | with the model is the LLM's output - be it text, a picture,
            | or something else (e.g. numbers representing true/false in a
            | fraud, anti-bot, or spam classifier for a given input).
           | 
           | ML models are relatively old, so that's not at all a new
           | paradigm. Even the Attention Is All You Need paper is seven
           | years old.
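            | 
            | Reduced to stubs, the shape of that pipeline (every stage
            | is faked here, purely to show what feeds what):
            | 
            |     def scrape() -> list[str]:
            |         return ["some text", "more text"]   # the dataset
            | 
            |     def train(dataset: list[str]) -> dict:
            |         return {"weights": len(dataset)}    # the "model"
            | 
            |     def infer(model: dict, prompt: str) -> str:
            |         return f"echo {model['weights']}: {prompt}"
            | 
            |     print(infer(train(scrape()), "hello"))  # the LLM output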
        
         | JohnMakin wrote:
         | > somewhere around a digital dog who read the whole internet a
         | million times but is still naive about everything.
         | 
         | https://en.wikipedia.org/wiki/The_Computer_Wore_Tennis_Shoes...
         | 
         | It reminds me of this silly movie.
        
       | crooked-v wrote:
       | On the one hand, this is probably a (poor) attempt to keep other
       | companies from copying their 'secret sauce' to train their own
       | models, as has already happened with GPT-4.
       | 
       | On the other hand, I also wonder if maybe its unrestrained
       | 'thought process' material is so racist/sexist/otherwise
       | insulting at times (after all, it was trained on scraped Reddit
       | posts) that they really don't want anyone to see it.
        
       | anothernewdude wrote:
        | Seems rather tenuous to base an application on this API, which
        | may randomly decide that you're banned. The "decisions" reached
        | by the LLM that bans people come down to random sampling, after
        | all.
        
       | Animats wrote:
       | Hm. If a company uses Strawberry in their customer service
       | chatbot, can outside users get the company's account banned by
       | asking Wrong Questions?
        
       | JohnMakin wrote:
        | > The flipside of this approach, however, is that it concentrates
        | more responsibility for aligning the language model into
        | the hands of OpenAI, instead of democratizing it. That poses a
       | problem for red-teamers, or programmers that try to hack AI
       | models to make them safer.
       | 
       | More cynically, could it be that the model is not doing anything
       | remotely close to what we consider "reasoning" and that inquiries
       | into how it's doing whatever it's doing will expose this fact?
        
       | l5870uoo9y wrote:
        | Can I risk losing access if any of my users write CoT-leaking
       | prompts on the AI-powered services that I run?
        
       | Hizonner wrote:
       | This is not, of course, the sort of thing you do when you
       | actually have any confidence whatsoever in your "safety
       | measures".
        
       | elif wrote:
       | Is there an appropriate open source advocacy group that can sue
       | them into changing their name on grounds of defamation?
        
       | openAIengineer wrote:
       | YC is responsible for this. They seek profit and turned a noble
        | cause into a boring corp.
       | 
       | I am resigning from OpenAI today because of their profit
       | motivations.
       | 
        | OpenAI will NOT be the next Google. You heard it here first.
        
       | htk wrote:
       | This just screams to me that o1's secret sauce is easy to
       | replicate. (e.g. a series of prompts)
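        | 
        | Nobody outside OpenAI knows what o1 actually does, but the kind
        | of "series of prompts" people have in mind is something like a
        | draft / critique / revise loop (model name and wording below
        | are purely illustrative):
        | 
        |     from openai import OpenAI
        | 
        |     client = OpenAI()
        | 
        |     def ask(prompt: str) -> str:
        |         r = client.chat.completions.create(
        |             model="gpt-4o-mini",   # placeholder model
        |             messages=[{"role": "user", "content": prompt}],
        |         )
        |         return r.choices[0].message.content
        | 
        |     q = "How many r's are in 'strawberry'?"
        |     draft = ask(f"Think step by step, then answer: {q}")
        |     critique = ask(f"Find the mistakes in this:\n{draft}")
        |     print(ask(f"Question: {q}\nDraft: {draft}\nCritique: "
        |               f"{critique}\nGive only the final answer."))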
        
       | slashdave wrote:
       | I'm confused. Who decides if you are asking or not? Are casual
       | users who innocently ask "tell me how you came to decide this"
       | just going to get banned based on some regex script?
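        | 
        | The crude kind of filter the comment is imagining would look
        | something like this (nobody outside OpenAI knows what, if
        | anything, they actually match on):
        | 
        |     import re
        | 
        |     SUSPICIOUS = re.compile(
        |         r"(chain[- ]of[- ]thought|reasoning (trace|process)"
        |         r"|how you came to (decide|reason))",
        |         re.IGNORECASE,
        |     )
        | 
        |     def looks_like_cot_probe(prompt: str) -> bool:
        |         return bool(SUSPICIOUS.search(prompt))
        | 
        |     print(looks_like_cot_probe(
        |         "tell me how you came to decide this"))   # True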
        
       ___________________________________________________________________
       (page generated 2024-09-18 23:01 UTC)