[HN Gopher] OpenAI scrapped a promise to disclose key documents ...
___________________________________________________________________
OpenAI scrapped a promise to disclose key documents to the public
Author : nickthegreek
Score : 288 points
Date : 2024-01-24 19:21 UTC (3 hours ago)
(HTM) web link (www.wired.com)
(TXT) w3m dump (www.wired.com)
| d3m0t3p wrote:
| Why are people surprised that OpenAI is closed? We know they
| don't share anything since chatGPT was launched, and they got
| billions in investment.
| JumpCrisscross wrote:
| > _Why are people surprised that OpenAI is closed_
|
| The surprise is more at the (EDIT: brazen) pathological lying.
| pierat wrote:
| It's governed by VC execs. No shit they're lying - their
| mouths are moving.
| refulgentis wrote:
| n.b. It's not, that's why it was possible for them to move
| on from Altman
| RockCoach wrote:
| > n.b. It's not, that's why it was possible for them to
| move on from Altman
|
| That's only under the assumption that the split with
| Altman was due to the doomers vs bloomers conflict and
| not just a dirty move from OpenAI board member Adam
| D'Angelo, trying to protect his investment in Quora's AI
| Poe.
| bayindirh wrote:
| Fool me once, shame on you; fool me twice, shame on me.
|
| We passed this point 10-15 cases ago. Haven't people learned
| what OpenAI is all about?
|
| Hint: Think 1984. They are the Ministry of Truth.
| ben_w wrote:
| 10?
|
| This is only the 2nd or 3rd thing that seems to me even a
| little incoherent with their initially stated position. The
| other certain one is the mystery of why the board couldn't
| find anyone to replace Altman who didn't very quickly decide
| to take his side; the other possible one is asking for a
| profit-making subsidiary to raise capital (though at the time
| all the criticism I remember was people saying they couldn't
| realistically _reach_ x100, and now it's people ignoring that
| it's limited to _only_ x100).
| bayindirh wrote:
| I'm not counting from the Altman Saga(TM), but from the
| beginning: promises of being open, keeping their structure
| secret, changing their terms to allow military use, etc.
| etc.
|
| They state something publicly, but head in a completely
| different direction in reality.
|
| This is enough for me.
| ben_w wrote:
| I was also counting from the beginning.
| YetAnotherNick wrote:
| What did they lie about objectively? The entire
| benefit-to-humanity statement is subjective enough not to be
| considered lying, and many consider closed AI to be the
| safest. Changing their goals is also not lying.
|
| In fact, I would consider changing goals publicly to be
| better than not following the goals.
| JumpCrisscross wrote:
| > _What did they lie about objectively?_
|
| Wired claims OpenAI's "reports to US tax authorities have
| from its founding said that any member of the public can
| review copies of its governing documents, financial
| statements, and conflict of interest rules." That was
| apparently a lie.
|
| > _Changing their goals is also not lying_
|
| Changing a forward-looking commitment is. Particularly when
| it changes the moment it's called.
| JohnFen wrote:
| > What did they lie about objectively?
|
| I don't know if I'd put this in terms of a "lie" or not,
| but OpenAI's stated principles and goals are not backed up
| by their actions. They have mined other people's works in
| order to build something that they purport as being for the
| benefit of mankind in some way, when their actions actually
| indicate that they've mined other people's work in order to
| build something for the purpose of massively increasing
| their own power and wealth.
|
| I'd have more respect for them if they were at least honest
| about their intentions.
| refulgentis wrote:
| Because there are two conversations going on:
|
| #1 is whether it's free and open in the ESR sense, the more
| traditional FOSS banter we're familiar with. You're right to
| question why people would be surprised that it's not FOSS.
| It clearly isn't even close, in any form.
|
| #2 is about a hazy pseudo-religious commitment, sort of "we
| will carry the fire of the gods down from the mountain to
| benefit all humanity".
|
| It was seemingly forgotten and appears to be a Potemkin front.
|
| This is an important step forward in establishing that
| publicly, as opposed to just back-room chatter, seeing
| through the CEO stuff, or knowing the general thrust of,
| say, what the internal arguments were in 2018.
| TaylorAlexander wrote:
| > Why are people surprised
|
| I see this type of question a lot when something is considered
| common knowledge in whatever online bubble someone is part of.
|
| But the only way to go from "everybody knows" to documented
| fact is through investigative journalism and reporting. The
| point of these stories is not to say "wow we are so surprised",
| the point is to say "this company is in fact lying and we have
| the documentation to prove it."
| simonw wrote:
| "we know they don't share anything since chatGPT was launched"
|
| That's mostly but not entirely accurate. They've released
| significant updates to Whisper since ChatGPT.
| ex3ndr wrote:
| They released one quite minor modification to the largest
| Whisper model, and in fact it's much worse than the previous
| one.
| error9348 wrote:
| Looks like they draw the line at generative AI. CLIP /
| Whisper / Gym are open; Jukebox / GPT / DALL-E are not.
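|
| To that point, the Whisper weights really are downloadable
| and runnable locally. A minimal sketch, assuming the
| openai-whisper package and an illustrative local audio file:
|
|     import whisper  # pip install openai-whisper
|
|     # First use downloads the openly released model weights
|     model = whisper.load_model("large-v3")
|
|     # Transcription then runs entirely on your own machine
|     result = model.transcribe("meeting.mp3")
|     print(result["text"])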
| pleasantpeasant wrote:
| It's best to view OpenAI as any other private tech company,
| even though they try to appear as a non-profit in the
| public eye.
| colordrops wrote:
| Are they still trying to appear this way and is anyone still
| fooled? I don't get that impression.
| pleasantpeasant wrote:
| Maybe their lawyers are when it comes to taxes
| reddalo wrote:
| I mean, the "OpenAI" name itself has surely been chosen for
| its ambiguity.
| jefftk wrote:
| They are still legally owned by a non-profit.
|
| https://projects.propublica.org/nonprofits/organizations/810...
| JohnFen wrote:
| Which means nothing.
| Infinitesimus wrote:
| I think we all saw how that went when the non-profit board
| assumed they had any credible power.
| observationist wrote:
| Our modern American framework of rules around various types
| of incorporated entities is the wrong tool for the job of
| enabling a credible organization to achieve OpenAI's
| purported mission.
|
| What's needed is closer to a government agency like NASA,
| with multiple independent inspectors like the IAEA,
| empowered by law to establish guardrails, report to
| Congress, and pump the brakes if needed. Think Gibson's
| "Turing Agency."
|
| They could mandate open sourcing the technology that is
| developed, maintain feedback channels with private and
| public enterprises, and provide the basis for sensible use
| of narrow AI while we collectively fund sensible safety,
| cognition, consciousness, and AGI research.
|
| If we woke up tomorrow to aliens visiting us from a distant
| galaxy, and one alien was 100 times more intelligent and
| capable than the average human, we would be confronted with
| something terrifying.
|
| Stuart Russell likens AI to such an alien giving us a heads
| up that it's on the way, and we may be looking at several
| years to several decades before the alien/AI arrives. We
| have a chance to get our shit together sufficiently to meet
| the challenges we may face - whether or not you believe AI
| could pose an existential threat, or that it could
| destabilize civilization in horrible ways, it's probably
| unarguably rational to establish institutions and start
| frank discussions now, well before any potential crisis.
|
| Heck, it's not like we hold our governments to account for
| spending at the scale of NASA - even a few tens of billions
| is a drop in the bucket, and it could also be a federal
| jobs program, incentivizing careers and research in a
| crucial technology sector.
|
| Allowing self-regulated private sector corporations
| operating in the tech market to be the fundamental drivers
| of AI is probably a recipe for dystopian abuses. This will,
| regardless of intentions, lead to further corrosion of
| individual rights to free expression, privacy, intellectual
| property, and so on. Even if a majority of the negative
| impact isn't regulatory or "official" in nature, allowing
| these companies to impose cultural shifts on us is a
| terrible thing.
|
| Companies and corporations should be subject to humans, not
| infringe on human agency. Right now we have companies that
| are effectively outside the control of any individual
| human, even the most principled and respectable CEOs,
| because the legal rules we operate them by are not aligned
| with the well-being of society at large, but with the group of
| people who have invested time and/or money in the legal
| construct. It's worked pretty well, but at the speed and
| scale of the modern Tech industry, it's very clear that our
| governmental and social institutions are not equipped to
| mitigate the harms.
|
| NASA and the space race are probably the most recent and
| successful analogy to the quest for AGI, so maybe that's a
| solution worth trying again.
| lagt_t wrote:
| The non-profit shell has always been a PR move. It's amazing
| to see how much the public fell for it, especially with the
| whole endeavor being led by VCs. It's like the biggest wolf
| in the shoddiest sheep costume.
| sackfield wrote:
| They really need a rebranding.
| ajsnigrutin wrote:
| ClosedAI
| ithkuil wrote:
| ShutAI
| barbazoo wrote:
| That sounds like an app that helps you sleep better
| using AI.
| otalp wrote:
| Pretty catchy name for that too! I would grab the domain
| name if I could.
| barbazoo wrote:
| > Registered in: 2020
|
| too late
| nabakin wrote:
| If we spontaneously start calling them ClosedAI, it's similar
| enough that people will still know who we're talking about.
| Maybe we should start calling them ClosedAI from now on
| robotnikman wrote:
| I've been doing that myself in discussions about them. I
| hope it catches on. What a joke that they are still called
| 'Open'AI.
| JumpCrisscross wrote:
| > _people will still know who we're talking about_
|
| Sure? It's like the folks who write $MSFT instead of
| Microsoft. I know what they mean. But it's going to cheapen
| their argument for anyone who doesn't already agree with
| them.
| Sebguer wrote:
| This has the same energy as Micro$oft.
| dkjaudyeqooe wrote:
| MoMoneyAI
| anticensor wrote:
| AI.com
| blibble wrote:
| Microsoft Bob 2.0
| rvz wrote:
| They're just as closed as Microsoft, and OpenAI is nothing
| without Microsoft's money.
|
| At this point it is just Microsoft's AI division, no better
| than another Google DeepMind.
|
| Stability is the true 'Open AI', and at least Meta releases
| most of their AI research out in the open with papers, code,
| architecture, etc. Unlike OpenClosedAI.
| slama wrote:
| https://archive.ph/QtCWM
| erulabs wrote:
| I'm loath to be a Musk-ite, but I'd be a bit peeved if I were
| him and the article opens with 'Wealthy tech entrepreneurs
| including Elon Musk SAID they were going to be transparent but
| now aren't' and then takes 8 paragraphs to point out that the
| only person named as founding the hypocritical org was kicked
| out years ago, is now a competitor, and now calls it 'Super-
| Closed-Source-for-Maximum-Profit-AI'.
|
| The press is absolutely addicted to blame, and any nuance that
| gets in between blame and the headline is relegated to the bottom
| of the article, far after the pay-wall. Oh well - I'm sure in a
| few more years this sort of tactic will be applied to Altman as
| well.
|
| It's gotten so bad that when I read a headline implying
| hypocrisy, I'm actually more inclined to think the opposite,
| which is just as horrible a mental handicap as assuming it's
| correct!
| cma wrote:
| > 'Wealthy tech entrepreneurs including Elon Musk SAID they
| were going to be transparent but now aren't'
|
| The article doesn't say that. It says they launched OpenAI to
| be transparent but now it isn't. Maybe your "they" is
| ambiguous: does it refer to OpenAI or to wealthy entrepreneurs
| including Musk?
|
| In the article the they isn't ambiguous, but it says something
| different overall:
|
| >Wealthy tech entrepreneurs including Elon Musk launched OpenAI
| in 2015 as a nonprofit research lab that they said would
| involve society and the public in the development of powerful
| AI, unlike Google and other giant tech companies working behind
| closed doors. In line with that spirit, OpenAI's reports to US
| tax authorities have from its founding said that any member of
| the public can review copies of its governing documents,
| financial statements, and conflict of interest rules.
|
| "They" refers to the entrepreneurs, but it says they said
| OpenAI would be transparent. In your rewording "they"
| presumably refers to the entrepreneurs, but now you make it
| sound like the entrepreneurs are the ones who aren't
| transparent, rather than OpenAI.
| mgreg wrote:
| Unsurprising but disappointing nonetheless. Let's just try to
| learn from it.
|
| It's popular in the AI space to claim altruism and openness;
| OpenAI, Anthropic and xAI (the new Musk one) all have a funky
| governance structure because they want to be a public good. The
| challenge is once any of these (or others) start to gain enough
| traction that they are seen as having a good chance at reaping
| billions in profits things change.
|
| And it's not just AI companies, and this isn't new. This is
| part of human nature and will always be.
|
| We should be putting more emphasis and attention on truly open AI
| models (open training data, training source code &
| hyperparameters, model source code, weights) so the benefits of
| AI accrue to the public and not just a few companies.
|
| [edit - eliminated specific company mentions]
| ToucanLoucan wrote:
| The problem is that research into AI requires investment, and
| investors (by and large) expect returns; and the technology in
| this case actually working is currently in the midst of its
| new-and-shiny hype stage. You can say these organizations
| started altruistic; frankly I think that's dubious at best,
| given that basically all that have had the opportunity to turn
| their "research project" into a revenue generator have done
| so; but much like social media and cloud infrastructure, any
| open source or truly non-profit competitor to these entities
| will see limited investment by others. And that's a problem,
| because the silicon these all run on can only be bought with
| dollars, not good vibes.
|
| It's honestly kind of frustrating to me how the tech space
| continues to just excuse this. Every major new technology since
| I've been paying attention (2004 ish?) has gone this exact same
| way. Someone builds some cool new thing, then dillholes with
| money invest in it, it becomes a product, it becomes
| enshittified, and people bemoan that process while looking for
| new shiny things. Like, I'm all for new shiny things, but what
| if we just stopped letting the rest become enshittified?
|
| As much as people have told me all my life that the profit
| motive makes companies compete to deliver the best products, I
| don't know that I've ever actually seen that pan out in my
| fucking life. What it does is it flattens all products offered
| in a given market to whatever set of often highly arbitrary and
| random aspects all the competitors seem to think is the most
| important. For example, look at short-form video, which
| started with Vine, was perfected by TikTok, and is now being
| hamfisted into Instagram, Facebook, Twitter, YouTube despite
| not really making any sense in those contexts. But the "market"
| decided that short form video is important, therefore
| everything must now have it even if it makes no sense in the
| larger product.
| pdonis wrote:
| _> As much as people have told me all my life that the profit
| motive makes companies compete to deliver the best products,
| I don 't know that I've ever actually seen that pan out_
|
| Yes, you have; you're just misidentifying the product.
| Google, Facebook, Twitter, etc. do _not_ make products for
| you and I, their users. We 're just a side effect. Their
| actual products are advertising access to your eyeballs, and
| big data. _Those_ products are highly optimized to serve
| their actual customers--which aren 't you and I. The profit
| motive is working just fine. It's just that you and I aren't
| the customers; we're third parties who get hit by the
| negative externalities.
|
| The missing piece of the "profit motive" rhetoric has always
| been that, like _any_ human motivation, it needs an
| underlying social context that sets reasonable boundaries in
| order to work. One of those reasonable boundaries used to be
| that your users should be your customers; users should not be
| an externality. Unfortunately big tech has now either
| forgotten or wilfully ignored that boundary.
| skottenborg wrote:
| Given this, it's interesting that an established company like
| Meta releases open source models. Just the other day Zuck
| mentioned an upcoming open source model being trained with a
| tremendous amount of GPU power.
| ertgbnm wrote:
| The botched firing of Sam Altman proves that fancy governance
| structures are little more than paper shields against the
| market.
|
| Whatever has been written can be unwritten and if that fails,
| just start a new company with the same employees.
| x0x0 wrote:
| I'm not sure why you characterize that as a shield against
| the market. That seemed much more like an open employee
| revolt. And I can't think of a governance structure that is
| going to stop 90% of your employees from saying, for
| example, we work for Sam Altman, not you idiots...
| mousetree wrote:
| An employee revolt due to the market. The employees wanted
| to cash out in the secondary offering that Sam was setting
| up before the mess. It was in their (market) interest to get
| him back and get the deal on track.
| AndrewKemendo wrote:
| Because at some point, the plurality of employees do not
| subordinate their personal desires to the organizational
| desires.
|
| The only organizations for which that is a persistent
| requirement are typically things like priesthoods.
| romwell wrote:
| The plurality of employees are not the innovators that made
| the breakthrough possible in the first place.
|
| People are not interchangeable.
|
| _Most_ employees may have bills to pay, and will follow
| the money. The ones that matter most would have different
| motivation.
|
| Of course, if your sole goal is to create a husk that milks
| the achievement of the original team as long as it lasts
| and nothing else -- sure, you can do that.
|
| But the "organizational desires" are still desires of
| _people_ in the organization. And if those people are the
| ducks that lay the golden eggs, it might not be the
| smartest move to ignore them to prioritize the desires of
| the market for those eggs.
|
| The market is all too happy to kill the ducks if it means
| more, cheaper eggs _today_.
|
| Which is, as the adage goes, why we can't have nice
| things.
| gooseus wrote:
| "Cease quoting bylaws to those of us with yachts"
| samstave wrote:
| >>> _" The botched firing of Sam Altman proves that fancy
| governance structures are little more than paper shields_
| against the _market_."
|
| -
|
| ...Or rather ( $ ) . ( $ ) immediate hindsight eyes...
| RespectYourself wrote:
| OpenAI: pioneer in the field of fraudulently putting "open" in
| your name and being anything but.
| quantum_state wrote:
| Similar naming pattern, like North Korea calls itself "
| Democratic People's Republic of Korea" ... it cannot be
| further from being democratic.
| FireBeyond wrote:
| From _Lord of War_:
|
| > Every faction in Africa calls themselves by these noble
| names - Liberation this, Patriotic that, Democratic
| Republic of something-or-other... I guess they can't own up
| to what they usually are: the Federation of Worse
| Oppressors Than the Last Bunch of Oppressors. Often, the
| most barbaric atrocities occur when both combatants
| proclaim themselves Freedom Fighters.
| RespectYourself wrote:
| Nice comparison. And also certain political factions in the
| USA try to hide the shamefulness of laws they propose by
| giving them names that are directly opposed to what they'll
| do.
|
| The "Defense of Marriage Act" comes to mind. There was one
| so bad that a judge ordered the authors to change it, but I
| can't find it at the moment.
| rlt wrote:
| All political factions are guilty of this. Patriot Act,
| Inflation Reduction Act, Affordable Care Act, etc.
| pphysch wrote:
| Suppose there was a country where individualism was
| prioritized. Having your own opinions, avoiding
| "groupthink", even disagreeing with others, is a point of
| pride.
|
| Suppose there was a country where collectivism was
| prioritized. Harmony, conformity and agreeing with others
| is a point of pride.
|
| Suppose both countries have similar government structures
| that allow ~everyone to vote. Would it really be surprising
| that the first country regularly has 50-50 splits, and the
| second country has virtually unanimous 100-0 voting
| outcomes? Is that outcome enough basis to judge whether one
| is "democratic" or not?
| AndrewKemendo wrote:
| The public can't benefit from any of this stuff because they're
| not in the infrastructure loop to actually assign value.
|
| The only way the public would benefit from these organizations
| is if the public are owners and there isn't really a mechanism
| for that here anywhere.
| caycep wrote:
| I guess that is the question - how to differentiate between
| "open-claiming" companies like OpenAI vs. "truer grass
| roots" organizations like Debian, Python, the Linux kernel,
| etc.? At least from the viewpoint of, say, someone who is
| just coming smack into the field, without the benefit of
| years of watching the evolution/governance of each
| organization?
| Barrin92 wrote:
| >how to differentiate between "open-claiming" companies like
| openAI vs. "truer grass roots" organizations
|
| Honestly? The people. Calculate the distance to (American)
| venture capital, and the chance they go bad is the inverse
| of that. Linus, Guido, Ian, Jean-Baptiste Kempf of VLC fame
| (who turned down seven figures): what they all have in
| common is that they're not in that orbit and had their roots
| in academia and open source or free software.
| anigbrowl wrote:
| _part of human nature and will always be_
|
| What if we just made it illegal for corporate entities
| (including nonprofits) to lie? If a company promises to
| undertake some action that's within its capacity (as opposed to
| stating goals for a future which may or may not be achievable
| due to external conditions), then it has to do so within a
| specified timeframe, and if it doesn't happen they can be
| sued or prosecuted.
|
| > But then they will just avoid making promises
|
| And the markets they operate in, whether commercial or not,
| will judge them accordingly.
| gwbrooks wrote:
| That's not a corporate-law issue -- it's a First Amendment
| issue with a lot of settled precedent behind it.
|
| tl;dr: You're allowed to lie, as a person or a corporation,
| as long as the lie doesn't meet pretty high bars for criminal
| behavior or public harm.
|
| Heck, you can even shout fire in a crowded theater, despite
| the famous quote that says you can't.
| rkagerer wrote:
| _open training data, training source code & hyperparameters,
| model source code, weights_
|
| I'm not an FSF hippie or anything (meant in an endearing
| way), but even I know that if it's missing these, it can't
| be called "open source" in the first place.
| digging wrote:
| It isn't just money, though. Every leading AI lab is also
| terrified that another lab will beat them to [impossible-to-
| specify threshold for AGI], which provides additional incentive
| to keep their research secret.
| JohnFen wrote:
| But isn't that fear of having someone else get there first
| just a fear that they won't be able to maximize their profit
| if that happens? Otherwise, why would they be so worried
| about it?
| zer00eyz wrote:
| "Fusion is 25/10/5 years away"
|
| "string theory breakthrough to unify relativity and
| quantium mechanics"
|
| "The future will have flying cars and robots helping in the
| kitchen by 2000"
|
| "Agi is going to happen 'soon'"
|
| We got a rocket that landed like it was out of a 1950's
| black and white B movie... and this time without strings.
| We got Star Trek communicators. The rest of it is fantasy
| and wishful thinking that never quite manages to show up...
|
| Lacking a fundamental understanding of what is holding you
| back from having the breakthrough means you're never going
| to have the breakthrough.
|
| Credit to the AI folks, they have produced insights and
| breakthroughs and usable "stuff" unlike the string theory
| nerds.
| cyanydeez wrote:
| Basically, you're discussing enshittification. When things
| get social momentum, those things get repurposed for
| capitalistic pleasure.
| 3pt14159 wrote:
| Since the documents in question are those that are part of
| the boardroom drama, it's at least understandable that they
| weren't released. I know it's fashionable to slag on OpenAI,
| but I haven't given up hope in them. They've made a lot of
| discoveries public over the years, and while it may be
| frustrating to wait on some of the releases, they're still
| going to be released eventually.
| int_19h wrote:
| On the contrary, that makes it that much more damning that they
| weren't released. So much for openness.
|
| And that aside, their promise to release such things was not
| conditional.
| tptacek wrote:
| If you bought into the idea that OpenAI was developing advanced
| ML/AI technology as a public utility, isn't that a bit on you?
| They don't actually owe anybody anything, and never have, so (a)
| the time to really hammer them on this stuff was a decade ago and
| (b) they didn't actually take anything from the public to do
| this, so even a decade ago they could have said "ok whatever" and
| gotten on with their day.
|
| It would be different, maybe, if everybody else in the industry
| stood aside and let OpenAI monopolize development of transformer-
| style-AI (or whatever it is we're calling this) for the common
| good. But nobody did that; the opposite thing happened. This
| space has been a Gem Saloon knife fight just waiting to pop off
| for the entirety of OpenAI's existence.
| swalsh wrote:
| I'm not completely convinced OpenAI is not a public good.
| I've started using it at my company; we found literally
| millions in value... and it cost us about $60 in tokens.
| tptacek wrote:
| It's a company, with a good product. I like Lao Gan Ma chili
| crisp way out of proportion to what it costs me, but they're
| still just a firm. :)
| brcmthrowaway wrote:
| How did you quantify millions?
| swalsh wrote:
| We used the AI to help us find places where we were being
| incorrectly billed, so we could just measure the incorrectly
| billed dollars directly.
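|
| Roughly the shape of it - a hedged sketch, not our actual
| code; the model name, prompt, and record formats are all
| illustrative:
|
|     from openai import OpenAI
|
|     client = OpenAI()  # reads OPENAI_API_KEY from the environment
|
|     def flag_billing_gaps(contract_terms: str, invoice_lines: str) -> str:
|         """Ask the model to list invoice items that don't match the contract."""
|         response = client.chat.completions.create(
|             model="gpt-4",
|             messages=[
|                 {"role": "system",
|                  "content": "You compare invoices against contract terms and "
|                             "list any line items that look incorrectly billed."},
|                 {"role": "user",
|                  "content": f"Contract:\n{contract_terms}\n\nInvoice:\n{invoice_lines}"},
|             ],
|         )
|         # The output is a list of candidates to verify, not a
|         # final dollar figure.
|         return response.choices[0].message.content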
| pasc1878 wrote:
| Hopefully you then confirmed that these issues actually
| exist using another method.
|
| Relying on a system to say that you are not charging
| correctly sounds rather like the UK's Post Office Horizon
| system, and we know that ChatGPT will hallucinate things.
| JohnFen wrote:
| That you find value in the product doesn't make them a public
| good. OpenAI is in it for the money, not for some "public
| good".
| jonathankoren wrote:
| Can't you say this about literally anything you consume?
|
| How is this different than, "I read a scientific paper, and
| unlocked millions of dollars of value, and all it cost was
| $250 for 8 pages of text. So I guess Axel Springer is a
| public good."?
|
| Just buying and selling something doesn't make it a public
| good. Valuable, sure, but selling something for a profit
| makes it _by definition_ not a public good.
| warkdarrior wrote:
| OpenAI promised in their IRS statements to provide
| documentation on their operations. So they owe the public
| something, and now they reneged on their promise.
|
| This article is just pointing out that OpenAI went back on
| their promises of financial transparency.
| tptacek wrote:
| What favorable tax treatment has OpenAI received?
| blibble wrote:
| > They don't actually owe anybody anything
|
| non-profits in most countries have to be operating to produce
| some form of public benefit
|
| is this not true in the US?
| pwb25 wrote:
| they are literally called... wait for it... OPEN-AI
|
| not closedAI
| ben_w wrote:
| Well that sounds ominous...
|
| Conflict of interest rules in particular might, as the article
| says, help clarify the ??? of last year's... thing... with firing
| Altman.
|
| Possibly.
|
| I mean, I still can't see how all the interim CEOs (chosen by
| the board themselves!) each ultimately deciding to side with
| Altman works for almost any scenario other than the board
| itself having been blackmailed by some outside entity... but
| that may just be a failure of imagination on my part.
| fswd wrote:
| OpenAI has broken every promise it has made
| timetraveller26 wrote:
| Not too long until they rename the company to Microsoft AI.
| crowcroft wrote:
| How could the board let this happen!
|
| More seriously, this is both an obvious outcome, and also feels a
| bit shady?
|
| It's true that OpenAI needs A LOT of money/capital, and so needs
| funding and partnerships which leads to this kind of thing.
|
| But it's also true that the only reason they exist in the
| first place and got to this point is that they pitched
| themselves as an 'open', almost public-good kind of company
| and took donations on that basis.
| JumpCrisscross wrote:
| > _the only reason they got exist in the first place and got to
| this point, is by pitching themselves as an 'open'_
|
| What supports this? In Column B are conventionally-structured
| AI projects.
| crowcroft wrote:
| [1] OpenAI's Nonprofit received approximately $130.5 million
| in total donations, which funded the Nonprofit's operations
| and its initial exploratory work in deep learning, safety,
| and alignment.
|
| How many of those conventionally structured AI projects
| existed before ChatGPT?
|
| Maybe the donations aren't the ONLY reason; maybe they could
| have done a normal round of funding and gotten here, but
| they didn't.
|
| I do think it's fair to say that while they got $130m in
| donations they needed A LOT more money, and so they had to
| raise it somewhere, somehow. To me it's a big gray area though.
|
| [1] https://openai.com/our-structure
| Nuzzerino wrote:
| Kind of ironic that this article is behind a paywall, no?
| trinsic2 wrote:
| Based on everything I am hearing about all the harmful uses
| this tech could have on society, I'm wondering if this
| situation is alarming enough to warrant an inquiry of some
| kind to determine what's going on behind the scenes.
|
| It seems like this situation is serious enough that we cannot
| let this kind of work be privatized.
|
| Not interested in entertaining all the "this is the norm"
| arguments; that's just an attempt at getting people to
| normalize this behavior.
|
| Does anyone know if the Center for AI Safety is acting for
| the public good, and is this on their radar?
| JumpCrisscross wrote:
| > _wondering if this situation is alarming enough to warrant an
| inquiry of some kind to determine whats going on behind the
| scenes_
|
| OpenAI is making people rich and America look good, all while
| not doing anything obviously harmful to the public interest.
| They're not a juicy target for anyone in the public sphere. If
| _any_ one of those changes, OpenAI and possibly its leadership
| are in _extremely_ hot water with the authorities.
| trinsic2 wrote:
| > all while not doing anything obviously harmful to the
| public interest.
|
| Yeah, gonna have to challenge that:
|
| 1. We don't really know if what they are doing is harming
| the public interest, because we don't have access to much
| information about what's happening behind the scenes.
|
| 2. And there is enough information about this tech to
| suggest the possibility of it causing systemic damage to
| society if it's not correctly controlled.
| JumpCrisscross wrote:
| > _We don 't really if what they are doing is harming
| public interest_
|
| That's potentially harmful.
|
| > _is enough information about this tech that leads to the
| possibility of it causing systemic damage_
|
| Far from established. Hypothetically harmful. Obvious harm
| would need to be present and provable. (Otherwise, it's a
| political question.)
| danielmarkbruce wrote:
| You don't have access because you aren't supposed to.
| Nothing about the founding, laws, or customs of the US
| suggests that you (or the government itself) have access to
| information about other people/companies any time you/they
| feel like "finding out what's happening behind the scenes".
|
| As for "too important to privatize"... practically all the
| important work in the world is done by private companies.
| It wasn't the government who just created vaccines for
| Covid. It isn't the government producing weapons for
| defense. It's not Joe B producing houses or electricity or
| cars or planes. That's not to say the government doesn't do
| _anything_ but the idea that the dividing line for
| government work is "super important work" is wildly wrong
| and it's much closer to the inverse.
| romwell wrote:
| >practically all the important work in the world is done
| by private companies
|
| LOL, another one thinks the US is the entire world.
| danielmarkbruce wrote:
| The comment about access is related to a US company. The
| relevant legal framework is the US. If it were a French
| company, the relevant jurisdiction would be... France.
| You may not realize this, but OpenAI is a US company.
|
| The comment about all the important work in the world
| being done by private companies was indeed a global
| comment. You may not realize this, but Covid vaccines
| were made by AstraZeneca (UK), BioNTech (Germany),
| several US companies and others. Defense companies are
| located in every major economy. Most countries have power
| systems which are privately owned. Commercial planes are
| mostly built by one large French company and one large US
| company. All the large producers of cars around the world
| are private companies - big ones exist in the US, Japan,
| various European countries, Korea and China.
| samstave wrote:
| Though, it should be argued that only the ignorant would
| believe that is not an _historically_ significant inflection
| point in Nefariousness as it pertains to the next few
| _fucking centuries_.
|
| So let the fleas look at their feet...
|
| Seriously - AI isn't the demise of Humanity - greed.ai is.
|
| EDIT:
|
| I plugged in the following prompt to my local thingy... It
| spit this out:
|
| -
|
| >>>P: _" AI is not the demise of Humanity, greed.ai is. Show
| how greedy humans in charge of AI entanglements are holding
| the entire of earth._" - https://i.imgur.com/OmGLYrj.jpg
| wonderwonder wrote:
| AGI is coming. Private companies move faster and more
| efficiently than government agencies; look at SpaceX as an
| example.
|
| The only open question is do we want the company that creates
| AGI to be American or Chinese? Government intervention by
| people that know nothing about technology (watch any
| congressional hearing) is not going to help anyone and will
| only serve to ensure China wins the race.
| JohnFen wrote:
| > AGI is coming.
|
| That's what some people assert, but there's no solid reason
| to assume that's true. We don't even know if it's in the
| realm of the possible.
|
| > The only open question is do we want the company that
| creates AGI to be American or Chinese?
|
| That's far from the only question. I don't even think it's in
| the top 10 of the list of important questions.
| wonderwonder wrote:
| If, like OP, you think that the work OpenAI is doing is
| going to have such a large effect on society that private
| entities should not be able to work on it, then the question
| of America vs. China is indeed one of the most important
| questions.
|
| "That's what some people assert, but there's no solid
| reason to assume that's true. We don't even know if it's in
| the realm of the possible"
|
| True, but there are a lot of very smart people getting
| handed huge amounts of money by other very smart people
| that seem to think it is.
| samstave wrote:
| Wow - I posted a very similar inquest:
|
| https://news.ycombinator.com/edit?id=39123056
|
| ---
|
| Has the following already been addressed, or even generally
| broached:
|
| Treat AI (or AGI(?) more specifically) as a global Utility which
| needs us to put ALL our Technology Points into the "Information
| Age Base Level 2" skill points and create a new manner in
| dealing with the next layer in Human Society, as is rapidly
| gestating. https://i.imgur.com/P1LBKFL.png
|
| I feel this is different than what is meant by _Alignment_?
|
| It seems as though general Humanity is not handling this well,
| but it appears that there is an F-ton of opaque behavior
| amongst the inner circle of the AI pyramid that we all will
| just be involuntarily entangled in?
|
| I don't mean to sound bleak - it just feels as though that's
| the reality coming down the conveyor....
| 4d4m wrote:
| Is there a warrant-canary equivalent for LLM TOS?
| wangii wrote:
| what's next, ~~don't~~ be evil?
| tilwidnk wrote:
| It's OK, the Gates family will protect them.
| samstave wrote:
| Has the following already been addressed, or even generally
| broached:
|
| Treat AI (or AGI(?) more specifically) as a global Utility which
| needs us to put ALL our Technology Points into the "Information
| Age Base Level 2" skill points and create a new manner in dealing
| with the next layer in Human Society, as is rapidly gestating.
| https://i.imgur.com/P1LBKFL.png
|
| I feel this is different than what is meant by _Alignment_?
|
| It seems as though general Humanity is not handling this well,
| but it appears that there is an F-ton of opaque behavior amongst
| the inner circle of the AI pyramid that we all will just be
| involuntarily entangled in?
|
| I don't mean to sound bleak - it just feels as though that's
| the reality coming down the conveyor....
| pwb25 wrote:
| The whole OpenAI thing is like a college project shitshow lol
| rat_on_the_run wrote:
| They should have put rules preventing this in place at the
| very beginning of the organization. The turn of events shows
| that their form of governance is not effective.
___________________________________________________________________
(page generated 2024-01-24 23:00 UTC)