[HN Gopher] Sam Altman goes before US Congress to propose licens...
       ___________________________________________________________________
        
       Sam Altman goes before US Congress to propose licenses for building
       AI
        
       Author : vforgione
       Score  : 621 points
       Date   : 2023-05-16 11:06 UTC (11 hours ago)
        
 (HTM) web link (www.reuters.com)
 (TXT) w3m dump (www.reuters.com)
        
       | brenns10 wrote:
       | Reminds me of SBF calling for crypto regulations while running
       | FTX. Being seen as friendly to regulations is great for optics
       | compared to being belligerently anti-regulation. You can appear
       | responsible and benevolent, and get more opportunity to weaken
       | regulation by controlling more of the narrative. And hey, if you
       | get end up getting some regulatory capture making competition
       | harder, that's a great benefit too.
       | 
       | OpenAI != FTX, just meaning to say calling for regulation isn't
       | an indication of good intentions, despite sounding like it.
        
         | asdfman123 wrote:
         | FB ran TV ads _asking_ for regulation too.
         | 
         | What established player doesn't want to make it as hard as
         | possible to compete with them?
        
         | JumpCrisscross wrote:
         | > _Reminds me of SBF calling for crypto regulations while
         | running FTX_
         | 
         | Scott Galloway called it the stop-me-before-I-kill-grandma
         | defence. (Paraphrasing.)
         | 
         | You made money making a thing. You continue to make the thing.
         | You're telling us how the thing will bring doom and gloom if
         | not dealt with (conveniently implying it _will_ change the
         | world). And you want to staff the regulatory body you call for
         | with the very butchers you're castigating.
        
           | bick_nyers wrote:
           | Sure, I get it, but if Sam Altman quit tomorrow, would it
           | stop Economic Competition -> Microsoft Shareholders ->
           | Microsoft -> OpenAI?
           | 
           | Is there really a better alternative here?
        
           | m00x wrote:
           | Except they don't make any money from their products. They're
           | losing hundreds of millions per month.
           | 
           | This isn't the same at all.
        
             | itronitron wrote:
             | FTX was also losing money.
        
             | mola wrote:
             | Well, this will give em time. Right now LLM have become a
             | commodity. Everybody is got them and can research and
             | develoo them. OpenAI is without a product, it has no
             | advantage. But if the general public will be limited. It'll
             | be hard to catch up to openAI.
             | 
             | I'm sry for the cynicism, but Altman seems very much
             | disingenuous with this.
        
               | hammyhavoc wrote:
               | OpenAI is currently registered as a non-profit, yet
               | they're projecting a billion dollars in revenue in 2024,
               | and they sell access to their APIs, which if their
               | previous spending is anything to go by, means they'll see
               | half a billion dollars in profit if we assume they aren't
               | going to reinvest it all.
               | 
               | Some big assumptions.
        
               | happytiger wrote:
               | > OpenAI is an American artificial intelligence (AI)
               | research laboratory consisting of the non-profit OpenAI
               | Incorporated and its for-profit subsidiary corporation
               | OpenAI Limited Partnership.
               | 
               | https://en.wikipedia.org/wiki/OpenAI
               | 
               | Just FYI, what you're saying isn't accurate. It _was_ ,
               | but it's not anymore.
        
             | Mistletoe wrote:
             | It may be more the same than you know. FTX had tons of
             | investors in it that was jumpstarting and fueling the whole
             | ponzi...
             | 
             | >According to a report from the Information, OpenAI's
             | losses have doubled to $540 million since it started
             | developing ChatGPT and similar products.
             | 
             | I mean sure that may be a drop in the bucket compared to
             | the 29B valuation for Open AI, but-
             | 
             | >Sept. 22, 2022
             | 
             | >Crypto Exchange FTX May Get a $32 Billion Valuation.
             | That's Probably Too Much.
             | 
             | OpenAI investors, Apr 2023-
             | 
             | Tiger Global Management Andreessen Horowitz Thrive Capital
             | Sequoia Capital K2 Global
             | 
             | FTX investors, Jan 2022-
             | 
             | Insight Partners Lightspeed Venture Partners Tiger Global
             | Management New Enterprise Associates Temasek Institutional
             | Venture Partners Steadview Capital SoftBank Ontario
             | Teachers' Pension Plan Paradigm
        
               | brookst wrote:
               | Are you suggesting that OpenAI is a ponzi scheme where
               | early investors are being paid with funds from later
               | investors?
        
         | thrway345772 wrote:
         | FWIW, OpenAI and FTX leadership share the same ideology
        
           | biggoodwolf wrote:
           | This is disappointing, I expected a bit more from OpenAI than
           | to fall for the nerd snipe that is EA.
        
           | seattle_spring wrote:
           | Which ideology is that? Only thing I've heard about is
           | "ruthless altruism" or something like that.
        
             | to11mtm wrote:
             | "Effective Altruism", which sounds nice but when you look
             | at it from the right angle it's just a form of 'public
             | lobbying' rather than direct government lobbying.
             | 
             | "Oh, this person donated X/Y to Z/Q causes! They can't be
             | that bad right?"
        
           | tern wrote:
           | This is mostly not true in my experience
        
           | zapataband1 wrote:
           | Make money and try to acquire a monopoly?
        
           | peepeepoopoo5 wrote:
           | oy vey
        
         | bparsons wrote:
         | This is also a way for industry incumbents to pull up the
         | ladder behind them.
         | 
         | Once you gain the lead position, it is in your interest to
         | increase the barriers to entry as much as possible.
        
         | stuckkeys wrote:
         | Just waiting for the Forbes cover to drop, then I can confirm
         | we are doomed. lol
        
         | lylejantzi3rd wrote:
         | > get more opportunity to weaken regulation by controlling more
         | of the narrative
         | 
         | You've got it backwards. I bet OpenAI wants those regulations
         | to be as restrictive as possible. They'll just negotiate an
         | exception for themselves. With increased regulation comes an
         | increased initial cost for competitors to get started in the
         | space. They want to lock down their near monopoly as soon as
         | they can.
        
           | sebzim4500 wrote:
           | I'm sure this is the plan, but I don't see how OpenAI will be
           | able to damage e.g. Anthropic without equally damaging
           | themselves.
        
             | runako wrote:
             | It's not just about Anthropic & other 10-figure companies,
             | it's about ensuring an oligopoly instead of a market with
             | cutthroat competition.
        
               | greiskul wrote:
               | Exactly. They want to have AI as a service. If any
               | startup could do it's own AI on the cheap, this would not
               | be possible (or at least not so profitable). They don't
               | mind having other big competitors, they think they can
               | win over big competitors with their marketing and first
               | mover advantage.
        
         | Barrin92 wrote:
         | Neither is it an indication of bad intentions and I don't even
         | think SBF was dishonest, his general behavior doesn't exactly
         | suggest he's some Machiavellian mastermind.
         | 
         | This is always the first comment when someone in an industry
         | talks about regulation but it doesn't change the fact that it's
         | needed and they're essentially right regardless of what
         | motivations they have.
        
           | JumpCrisscross wrote:
           | Altman is simultaneously pumping a crypto project [1].
           | 
           | [1] https://www.yahoo.com/news/worldcoin-chatgpt-sam-altman-
           | ethe...
        
             | Filligree wrote:
             | Which is sufficient reason to avoid OpenAI now, frankly.
        
             | wellthisisgreat wrote:
             | It's a disgrace
        
               | lubesGordi wrote:
               | I think the idea is that you need some way to filter out
               | the bots, so 'worldcoin' or 'worldid' is used to prove
               | 'personhood'.
        
           | rqtwteye wrote:
           | " his general behavior doesn't exactly suggest he's some
           | Machiavellian mastermind."
           | 
           | Come on! You don't get to the place he got to by accident.
           | This requires careful planning and ruthless execution. He
           | just faked being the nerdy kid who wants to do good and is
           | surprised by the billions coming to him.
        
             | JumpCrisscross wrote:
             | > _his general behavior doesn 't exactly suggest he's some
             | Machiavellian mastermind_
             | 
             | >> _don 't get to the place he got to by accident_
             | 
             | You both agree. Bankman-Fried was a dumb Machiavellian.
        
               | serf wrote:
               | being labeled a machiavellian may as well be a label for
               | 'maliciously self-serving', unless you're referring to
               | Machiavelli's work on the 'Discoures on Livy' -- and no
               | one ever is referring to that aspect of Machiavelli when
               | labeling people with the phrase.
        
               | [deleted]
        
             | Barrin92 wrote:
             | >Come on! You don't get to the place he got to by accident.
             | 
             | You can literally become president of the US by accident
             | these days. SBF self-reported to a random journalist one
             | day after all hell broke lose with messages so
             | incriminating the reporter had to confirm that it was a
             | real conversation.
             | 
             | Half of the American elite class voluntarily sat on the
             | board of a bogus company just because the woman running it
             | was attractive and wore black turtlenecks. The sad reality
             | is that these people aren't ruthless operators, they're
             | just marginally less clueless than the people who got them
             | into their positions
        
               | rqtwteye wrote:
               | "You can literally become president of the US by accident
               | these days."
               | 
               | Who became president by accident? You may not like them
               | personally or their politics , but I am not aware of any
               | president that didn't put enormous amounts of work and
               | effort over years into becoming president.
        
               | rurp wrote:
               | Trump spent a great deal of time during the 2016 campaign
               | setting up projects to cash in on a loss (like a new tv
               | station). This very little sign that he spent time
               | preparing to actually win and serve as president. It
               | wasn't really an outlandish idea either, most
               | presidential candidates these days do it primarily to
               | raise a profile they can cash in on via punditry, books,
               | etc.
        
             | glitchc wrote:
             | Grifters have to believe their own Koolaid first before
             | they can convince others.
        
           | brenns10 wrote:
           | Isn't the reason that the industry person is "right" about
           | regulation being necessary usually... because the tide of
           | public opinion is turning towards regulation, so they are
           | getting ahead as my strategy above described? It's difficult
           | to give credit to these folks for being "right" when it's
           | more accurately described as "trying to save their profit
           | margins".
        
           | digging wrote:
           | You might say that any regulation is better than none, but
           | bad regulation can be way more insidious and have unique
           | dangers.
           | 
           | As a blunt analogy, let's say there's no law against murder
           | right now. You and I both agree that we need a law against
           | murder. But I have the ear of lawmakers, and I go sit down
           | with them and tell them that you and I agree: We need a law
           | against murder.
           | 
           | And then I help them write a law that makes murder illegal.
           | Only, not _all_ killing counts as murder, obviously. So if it
           | 's an accident, no murder. Self defense? No murder. And also
           | if they are doing anything that "threatens" my business
           | interests, not murder. Great, we've got a law that prevents
           | unnecessary killing! And now I get to go ~~murder you~~
           | defend my business interests when you protest that the new
           | law seems unfair.
        
             | JumpCrisscross wrote:
             | > _then I help them write a law that makes murder illegal.
             | Only, not all killing counts as murder, obviously. So if it
             | 's an accident, no murder. Self defense? No murder...now I
             | get to go ~~murder you~~ defend my business interests_
             | 
             | Isn't this a classic case of some regulation being better
             | than none? You could have murdered them at the start, too.
        
               | digging wrote:
               | Yes, but if I had murdered them at the start or even
               | tried, maybe people would say, "Hey, this is murder and
               | it's bad." Now I've got the force of law and authority on
               | my side. You either allow me to do murders or you're the
               | one causing problems. It may be quite a bit harder to
               | change things and there will be irreparable damage before
               | we do.
        
         | variant wrote:
         | > OpenAI != FTX, just meaning to say calling for regulation
         | isn't an indication of good intentions, despite sounding like
         | it.
         | 
         | I'd argue that any business advocating for regulation is
         | largely motivated by its own pursuit of regulatory capture.
        
           | stingraycharles wrote:
           | Didn't Facebook / Meta also do something similar during the
           | whole "fake news" controversy?
           | 
           | https://www.cnbc.com/2020/02/15/facebook-ceo-zuckerberg-
           | call...
        
         | lweitrju39p4rj wrote:
         | [flagged]
        
       | sadhd wrote:
       | I'll do you one better--to negative infinity mod points and
       | beyond! I can put a 13b parameter LLM on my phone. That makes it
       | a bearable arm. Arms are not defined under the US Constitution,
       | just the right of _the_people_ to keep them shall not be
       | infringed, but it is a weapon to be sure.
        
         | bigbillheck wrote:
         | Got some news for you about munitions control laws.
        
           | sadhd wrote:
           | I know about ITAR. Cannons were in common use during the
           | 1790s as well. Export ban does not equal possession ban.
        
             | sadhd wrote:
             | Plus, I don't think a traditionalist court has looked at
             | radio, encryption, and computer code as bearable arms yet.
        
       | cs702 wrote:
       | ...which as a consequence would make it costly and difficult for
       | new AI startups to enter the market.
       | 
       | Everyone here on HN can see that.
       | 
       |  _Shame on Sam_ for doing this.
        
       | davidguetta wrote:
       | [flagged]
        
       | tehjoker wrote:
       | Sam just wants to secure a monopoly position. The dude is a
       | businessman, there's no way he buys his own bullshit.
        
       | villgax wrote:
       | King of the hill, what a clown
        
       | itronitron wrote:
       | what a loser
        
       | ttul wrote:
       | If your business doesn't have a moat of its own, get government
       | to build one for you by forcing competitors to spend tons of
       | money complying with regulations. Will the regulations actually
       | do anything for AI safety? It's far too early to say. But they
       | will definitely protect OpenAI from competition.
        
       | nonstopdev wrote:
       | Love how these big tech companies are using congress fears to
       | basically let them define rules for anyone to compete with them.
        
       | gautamdivgi wrote:
       | Remember the paper where they admitted to having no "moat". This
       | is basically them trying to build a "moat" through regulation.
       | Since big-co are probably the only ones that can do any sort of
       | license testing right now. It's essentially trying to have an
       | "FDA" for AI and crowd out competitors before they emerge.
        
         | wkat4242 wrote:
         | Yes exactly. This is the feeling I get too. They already have
         | critical mass and market penetration to deal with all the red
         | tape they want. But it's easy to nip startups in the bud this
         | way.
         | 
         | Also, this will guarantee the tech stays in the hands of rich
         | corporations armed with lawyers. It would be much better for
         | open source AI to exist so we're not dependent on those
         | companies.
        
       | bitL wrote:
       | Here we go, people here were ridiculing right-to-work-on-AI
       | licenses not that long ago and now we have it coming right from
       | the main AI boss, throwing the interest of most of us
       | (democratized AI) down the toilet.
        
       | ok123456 wrote:
       | He just wants regulatory capture to make it harder for new
       | entrants.
        
       | bmmayer1 wrote:
       | I'm a huge fan of OpenAI and Sam in particular. So don't take
       | this the wrong way.
       | 
       | But doesn't this seem like another case of regulatory capture by
       | an industry incumbent?
        
       | nico wrote:
       | This is quite incredible
       | 
       | Could you imagine if MS had convinced the govt back in the day,
       | to require a special license to build an operating system (this
       | blocking Linux and everything open)?
       | 
       | It's essentially what's happening now,
       | 
       | Except it is OpenAI instead of MS, and it is AI instead of Linux
       | 
       | AI is the new Linux, they know it, and are trying desperately to
       | stop it from happening
        
         | xiphias2 wrote:
         | I bet OpenAI is using MS connections and money for lobbying, so
         | it's basically MS again.
        
           | rvz wrote:
           | Exactly, say it with me:
           | 
           | Embrace, Extend...
           | 
           | What comes after Extend?
        
           | modshatereality wrote:
           | just a billion dollar coincidence
        
         | brkebdocbdl wrote:
         | Microsoft did thought. not directly like that because up to the
         | 90s we still have the pretense of being free.
         | 
         | Microsoft did influence government spending in ways that
         | require windows in every govt owned computer, and schools.
        
         | sangnoir wrote:
         | I guess @sama took that leaked Google memo to heart ("We have
         | no moat... and neither does OpenAI"). Requiring a license would
         | take out the biggest competitive threats identified in the same
         | memo (Open Source projects) which can result in self-hosted
         | models, which I suppose Altman sees as an existential threat to
         | OpenAI
        
           | hellojesus wrote:
           | There is no way to stop self hosted models. The best would be
           | to send gov to data centers, but what if those centers are
           | outside US jurisdiction? Too funny to watch the gov play
           | these losing games.
        
             | sangnoir wrote:
             | > There is no way to stop self hosted models.
             | 
             | edit: _Current_ models- sure, but they will soon be
             | outdated. I think the idea is to strangle the development
             | of comparable, SoTA models in the future that individuals
             | can self-host; OpenAI certainly won 't release their
             | weights, and they'd want the act of releasing weights
             | without a license to be criminalized. If such a law is
             | signed, it would remove the threat of smaller AI companies
             | from disintermediating OpenAI, and individuals from
             | collaborating to engage in any activity that results in
             | publicly available model weights (or even making the recipe
             | itself illegal to distribute)
        
               | hellojesus wrote:
               | I thought we got away from knowledge distribution
               | embargos via 1A during the encryption era.
               | 
               | Even if it passed, I find it hard to believe a bunch of
               | individuals couldn't collaborate via distributed
               | training, which would be almost impossible to prohibit.
               | Anyone could mask their traffic or connect to anon US VPN
               | to circumvent it. The demand will be there to outweigh
               | the risk.
        
               | NavinF wrote:
               | > distributed training
               | 
               | Unfortunately this isn't a thing. Eg too much batch norm
               | latency leaves your GPUs idle. Unless all your hardware
               | is in the same building, training a single model would be
               | so inefficient that it's not worth it.
        
               | 10000truths wrote:
               | You can't strangle the development of such models because
               | the data comes from anywhere and everywhere. Short of
               | shutting off the entire Internet, there's nothing a
               | government can do to prevent some guy on the opposite
               | side of the world from hoovering up publicly accessible
               | human text into a corpus befitting an LLM training set.
        
               | bootsmann wrote:
               | It costs a lot of money to train foundation models, that
               | is a big hurdle to open source models which can strangle
               | further development.
               | 
               | Open source AI needs people with low stakes (Meta AI) who
               | continue to open source foundation models for the
               | community to tinker with
        
               | pentagrama wrote:
               | I have a question, AI is not exclusively to use with data
               | from the internet right?, eg you can throw a bunch of
               | text and ask to order it on a table with x columns, this
               | will need data from the internet? I guess not, you can
               | self host and use it exclusively with your data
        
             | ThrowawayTestr wrote:
             | Sure, but they can be made illegal and difficult to share
             | on the clear web.
        
             | anticensor wrote:
             | Access blocks to those in the US?
        
         | johnalbertearle wrote:
         | I'm no expert, but I'm old and I think that Unix is actually
         | the model that won. Linux won because of Unix IMO, and I think
         | that its too late for the regulators. Not that I understand the
         | stuff but like Unix, the code and the ideas are out there in
         | Universities and even if OpenAI gets their licensing, there
         | will be really open stuff also. So, no worries. Except for the
         | fact that AI itself - well, are we mature enough to handle it
         | without supervision? Dunno.
        
         | 0xDEF wrote:
         | Microsoft owns 49% of OpenAI and is its primary partner and
         | customer.
         | 
         | OpenAI _is_ Microsoft.
        
         | pwdisswordfishc wrote:
         | > OpenAI instead of MS
         | 
         | In other words, MS with extra steps.
        
         | ploppyploppy wrote:
         | Is the old MS tactic of Embrace, Extend, Extinguish? Albeit
         | through the mask of OpenAI / Altman?
        
       | nsxwolf wrote:
       | Pure Drug and Food Act, but for AI. Get in early and make
       | regulations too expensive for upstarts to deal with.
        
       | trappist wrote:
       | It seems to me every licensing regime begins with incumbents
       | lobbying for protection from competition, then goes down in
       | history as absolutely necessary consumer protection programs.
        
       | theyeenzbeanz wrote:
       | Just drop the "Open" part and rename it to CorpAI at this point
       | since it's anything but.
        
       | askin4it wrote:
       | What a wall of words. (The HN comments )
       | 
       | Someone call me when the AI is testifying to the committee.
       | Otherwise, I'm busy.
        
       | ngneer wrote:
       | Sometimes those who have gotten on the bus will try pushing out
       | those who have not. Since when do corporations invite regulation?
        
       | aussiegreenie wrote:
       | OpenAI was ment to be "open" and develope AI for good. OpenAi
       | became everything it said was wrong. Open source models ran
       | locally are the answer but what is the question?
       | 
       | Change is coming quickly. There will be users and there will
       | losers. Hopefully, we can finially get productivity into the
       | information systems.
        
       | srslack wrote:
       | Imagine thinking that regression based function approximators are
       | capable of anything other than fitting the data you give it. Then
       | imagine willfully hyping up and scaring people who don't
       | understand, and because it can predict words you take advantage
       | of the human tendency to anthropomorphize, so it follows that it
       | is something capable of generalized and adaptable intelligence.
       | 
       | Shame on all of the people involved in this: the people in these
       | companies, the journalists who shovel shit (hope they get
       | replaced real soon), researchers who should know better, and
       | dementia ridden legislators.
       | 
       | So utterly predictable and slimy. All of those who are so gravely
       | concerned about "alignment" in this context, give yourselves a
       | pat on the back for hyping up science fiction stories and
       | enabling regulatory capture.
        
         | [deleted]
        
         | kajumix wrote:
         | Imagine us humans being merely regression based function
         | approximators, built on a model that has been training, quite
         | inefficiently, for millenia. Many breakthroughs (for example
         | heliocentricism, evolution, and now AI) put us in our place,
         | which is not as glorious as you'd think.
        
         | Culonavirus wrote:
         | > Imagine thinking that regression based function approximators
         | are capable of anything other than fitting the data you give
         | it.
         | 
         | Literally half (or more) of this site's user base does that.
         | And they should know better, but they don't. Then how can a
         | typical journo or a legislator possibly know better? They
         | can't.
         | 
         | We should clean up in front of our doorstep first.
        
         | chpatrick wrote:
         | Imagine thinking that NAND gates are capable of anything other
         | than basic logic.
        
           | Eisenstein wrote:
           | 1. Explain why it is not possible for an incredibly large
           | number of properly constructed NAND gates to think
           | 
           | 2. Explain why it is possible for a large number of properly
           | constructed neurons to think.
        
             | HDThoreaun wrote:
             | 3. Explain the hard problem of consciousness.
             | 
             | Just because we don't understand how thinking works doesn't
             | mean it doesn't work. LLMs have already shown the ability
             | to use logic.
        
               | grumple wrote:
               | To use logic, or to accurately spit out words in an order
               | similar to their training data?
        
               | HDThoreaun wrote:
               | To solve novel problems that do not exist in their
               | training data. We can go as deep into philosophy of mind
               | as you want here, but these systems are more than mere
               | parrots. And we have no idea what it will take for them
               | to take the next step since we don't understand how we
               | have ourselves.
        
         | api wrote:
         | The whole story of OpenAI is really slimy too. It was created
         | as a non-profit, then it was handed somehow to Sam who took it
         | closed and for-profit (using AI fear mongering as an excuse)
         | and is now seeking to leverage government to lock it into a
         | position of market dominance.
         | 
         | The whole saga makes Altman look really, really terrible.
         | 
         | If AI really is this dangerous then we definitely don't need
         | people like this in control of it.
        
           | wellthisisgreat wrote:
           | > The whole saga makes Altman look really, really terrible.
           | 
           | At this point, with this part about openai and worldcoin...
           | if it walks like a duck and talks like a duck..
        
           | nmfisher wrote:
           | Open AI has been pretty dishonest since the pivot to for-
           | profit, but this is a new low.
           | 
           | Incredibly scummy behaviour that will not land well with a
           | lot of people in the AI community. I wonder if this is what
           | prompted a lot of people to leave for Anthropic.
        
         | precompute wrote:
         | Yes! I've been expressing similar sentiments whenever I see
         | people hyping up "AI", although not written as well your
         | comment.
         | 
         | Edit: List of posts for anyone interested
         | http://paste.debian.net/plain/1280426
        
         | varelse wrote:
         | [dead]
        
         | chaxor wrote:
         | What do you think about the papers showing _mathematical
         | proofs_ that GNNs (i.e. GATs /transformers) _are_ dynamic
         | programmers and therefore perform algorithmic reasoning?
         | 
         | The fact that these systems can extrapolate well beyond their
         | training data by learning algorithms is quite different than
         | what has come before, and anyone stating that they "simply"
         | predict next token is severely shortsighted. Things don't have
         | to be 'brain-like' to be useful, or to have capabilities of
         | reasoning, but we have evidence that these systems have aligned
         | well with reasoning tasks, perform well at causal reasoning,
         | and we also have mathematical proofs that show how.
         | 
         | So I don't understand your sentiment.
        
           | rdedev wrote:
           | To be fair LLMs are predicting the next token. It's just that
           | to get better and better predictions it needs to understand
           | some level of reasoning and math. However it feels to me that
           | a lot of this reasoning is brute forced from the training
           | data. Like chatgpt gets some things wrong when adding two
           | very large numbers. If it really knew the algorithm for
           | adding two numbers it shouldn't be making them in the first
           | place. I guess same goes for issues like hallucinations. We
           | can keep pushing the envelope using this technique but I'm
           | sure we will hit a limit somewhere
        
             | uh_uh wrote:
             | Both of these statements can be true:
             | 
             | 1. ChatGPT knows the algorithm for adding two numbers of
             | arbitrary magnitude.
             | 
             | 2. It often fails to use the algorithm in point 1 and
             | hallucinates the result.
             | 
             | Knowing something doesn't mean it will get it right all the
             | time. Rather, an LLM is almost guaranteed to mess up some
             | of the time due to the probabilistic nature of its
             | sampling. But this alone doesn't prove that it only brute-
             | forced task X.
        
             | visarga wrote:
             | > If it really knew the algorithm for adding two numbers it
             | shouldn't be making them in the first place.
             | 
             | You're using it wrong. If you asked a human to do the same
             | operation in under 2 seconds without paper, would the human
             | be more accurate?
             | 
             | On the other hand if you ask for a step by step execution,
             | the LLM can solve it.
        
               | catchnear4321 wrote:
               | am i bad at authoring inputs?
               | 
               | no, it's the LLMs that are wrong.
        
               | throwuwu wrote:
               | Create two random 10 digit numbers and sit down and add
               | them up on paper. Write down every bit of inner monologue
               | that you have while doing this or just speak it out loud
               | and record it.
               | 
               | ChatGPT needs to do the same process to solve the same
               | problem. It hasn't memorized the addition table up to 10
               | digits and neither have you.
        
               | chongli wrote:
               | No, but I can use a calculator to find the correct
               | answer. It's quite easy in software because I can copy-
               | and-paste the digits so I don't make any mistakes.
               | 
               | I just asked ChatGPT to do the calculation both by using
               | a calculator and by using the algorithm step-by-step. In
               | both cases it got the answer wrong, with different
               | results each time.
               | 
               | More concerning, though, is that the answer was visually
               | close to correct (it transposed some digits). This makes
               | it especially hard to rely on because it's essentially
               | lying about the fact it's using an algorithm and actually
               | just predicting the number as a token.
        
               | gremlinsinc wrote:
               | this is one thing makes me think those claiming "it isn't
               | AI" are just caught up in cognizant dissonance. For llm's
               | to function, we have to basically make it reason out, in
               | steps the way we learned to do in school, literally make
               | it think, or use inner monologue, etc.
        
               | throwuwu wrote:
               | It is funny. Lots of criticisms amount to "this AI sucks
               | because it's making mistakes and bullshitting like a
               | person would instead of acting like a piece of software
               | that always returns the right answer."
               | 
               | Well, duh. We're trying to build a human like mind, not a
               | calculator.
        
               | ipaddr wrote:
               | Not without emotions and chemical reactions. You are
               | building a word predictor
        
               | ipaddr wrote:
               | 2 seconds? What model are you using?
        
               | flangola7 wrote:
               | GPT 3.5 is that fast.
        
               | tedunangst wrote:
               | I never told the LLM it needed to answer immediately. It
               | can take its time and give the correct answer. I'd prefer
               | that, even.
        
             | chaxor wrote:
             | Of course it predict the next token. Every single person on
             | earth knows that so it's not worth repeating at all.
             | 
             | As for the fact that it gets things wrong sometimes - sure,
             | this doesn't say it actually _learned_ every algorithm (in
             | whichever model you may be thinking about). But the nice
             | thing is that we now have this proof via category theory,
             | and it allows us to both frame and understand what has
             | occurred, and to consider how to align the systems to learn
             | algorithms better.
        
               | rdedev wrote:
               | The fact that it sometimes fails simple algorithms for
               | large numbers but shows good performance in other complex
               | algorithms with simple inputs seems to me that something
               | on a fundamental level is still insufficient
        
               | zamnos wrote:
               | Insufficient for what? Humans regularly fail simple
               | algorithms for small numbers, nevermind large numbers and
               | complex algorithms
        
               | starlust2 wrote:
               | You're focusing too much on what the LLM can handle
               | internally. No LLMs aren't good at math, but they
               | understand mathematic concepts and can use a program or
               | tool to perform calculations.
               | 
               | Your argument is the equivalent of saying humans can't do
               | math because they rely on calculators.
               | 
               | In the end what matters is whether the problem is solved,
               | not how it is solved.
               | 
               | (assuming that the how has reasonable costs)
        
               | ipaddr wrote:
               | Humans are calculators
        
               | glitcher wrote:
               | > Of course it predict the next token. Every single
               | person on earth knows that so it's not worth repeating at
               | all
               | 
               | What's a token?
        
               | visarga wrote:
               | A token is either a common word or a common enough word
               | fragment. Rare words are expressed as multiple tokens,
               | while frequent words as a single token. They form a
               | vocabulary of 50k up to 250k. It is possible to write any
               | word or text in a combination of tokens. In the worst
               | case 1 token can be 1 char, say, when encoding a random
               | sequence.
               | 
               | Tokens exist because transformers don't work on bytes or
               | words. This is because it would be too slow (bytes), the
               | vocabulary too large (words), and some words would appear
               | too rarely or never. The token system allows a small set
               | of symbols to encode any input. On average you can
               | approximate 1 token = 1 word, or 1 token = 4 chars.
               | 
               | So tokens are the data type of input and output, and the
               | unit of measure for billing and context size for LLMs.
        
             | agentultra wrote:
             | And LLMs will never be able to _reason_ about mathematical
             | objects and proofs. You cannot learn the truth of a
             | statement by reading more tokens.
             | 
             | A system that can will probably adopt a different acronym
             | (and gosh that will be an exciting development... I look
             | forward to the day when we can dispatch trivial proofs to
             | be formalized by a machine learning algorithm so that we
             | can focus on the interesting parts while still having the
             | entire proof formalized).
        
               | chaxor wrote:
               | You should read some of the papers referred to in the
               | above comments before making that assertion. It may take
               | a while to realize the overall structure of the argument,
               | how the category theory is used, and how this is directly
               | applicable to LLMs, but if you are in ML it should be
               | obvious. https://arxiv.org/abs/2203.15544
        
               | agentultra wrote:
               | There are methods of proof that I'm not sure dynamic
               | programming is fit to solve but this is an interesting
               | paper. However even if it can only solve particular
               | induction proofs that would be a big help. Thanks for
               | sharing.
        
             | zootreeves wrote:
             | You know the algorithm for arithmetic. Are you telling me
             | you could sum any large numbers first attempt, without any
             | working and in less than a second 100% of the time?
        
               | jmcgeeney wrote:
               | I could with access to a computer
        
               | starlust2 wrote:
               | If you get to use a tool, then so does the LLM.
        
               | joaogui1 wrote:
               | I don't get why the sudden fixation on time, the model is
               | also spending a ton of compute and energy to do it
        
           | felipemnoa wrote:
           | >>What do you think about the papers showing mathematical
           | proofs that GNNs (i.e. GATs/transformers) are dynamic
           | programmers and therefore perform algorithmic reasoning?
           | 
           | Do you mind linking to one of those papers?
        
           | joaogui1 wrote:
           | The paper shows the equivalence for specific networks, it
           | doesn't say every GNN (and as such transformers) are Dynamic
           | Programmers. Also the models are explicitly trained on that
           | task, in a regime quite different from ChatGPT. What the
           | paper shows and the possibility of LLMs being able to reason
           | are pretty much completely independent from each other
        
           | agentofoblivion wrote:
           | Give me a break. Very interesting theoretical work and all,
           | but show me where it's actually being used to do anything of
           | value, beyond publication fodder. You could also say MLPs are
           | proved to be universal approximators, and can therefore model
           | any function, including the one that maps sensory inputs to
           | cognition. But the disconnect between this theory and reality
           | is so great that it's a moot point. No one uses MLPs this way
           | for a reason. No one uses GATs in systems that people are
           | discussing right now either. GATs rarely even beat GCNs by
           | any significant margin in graph benchmarks.
        
             | chaxor wrote:
             | Are you saying that the new mathematical theorems that were
             | proven using GNNs from Deepmind were not useful?
             | 
             | There were two very noteworthy (Perhaps Nobel prize level?)
             | breakthroughs in two completely different fields of
             | mathematics (knot theory and representation theory) by
             | using these systems.
             | 
             | I would certainly not call that "useless", even if they're
             | not quite Nobel-prize-worthy.
             | 
             | Also, "No one uses GATs in systems people discuss right
             | now" ... Transformer _are_ GATs (with PE) ... So, you 're
             | _incredibly_ wrong.
        
               | agentofoblivion wrote:
               | You're drinking from the academic marketing koolaid.
               | Please tell me: where are these methods being applied in
               | AI systems today?
               | 
               | And I'm so tired of this "transformers are just GNNs"
               | nonsense that Petar has been pushing (who happens to have
               | invented GATs and has a vested interest in overstating
               | their importance). Transformers are GNNs in only the most
               | trivial way: if you make the graph fully connected and
               | allow everything to interact with everything else. I.e.,
               | not really a graph problem. Not to mention that the use
               | of positional encodings breaks the very symmetry that
               | GNNs were designed to preserve. In practice, no one is
               | using GNN tooling to build transformers. You don't see
               | PyTorch geometric or DGL in any of the code bases. In
               | fact, you see the opposite: people exploring transformers
               | to replace GNNs in graph problems and getting SOTA
               | results.
               | 
               | It reminds me of people that are into Bayesian methods
               | always swooping in after some method has success and
               | saying, "yes, but this is just a special case of a
               | Bayesian method we've been talking about all along!" Yes,
               | sure, but GATs have had 6 years to move the needle, and
               | they're no where to be found within modern AI systems
               | that this thread is about.
        
           | pdonis wrote:
           | _> What do you think about the papers showing mathematical
           | proofs that GNNs (i.e. GATs /transformers) are dynamic
           | programmers and therefore perform algorithmic reasoning?_
           | 
           | Do you have a reference?
        
           | uh_uh wrote:
           | I just don't get how the average HN commenter thinks (and
           | gets upvoted) that they know better than e.g. Ilya Sutskever
           | who actually, you know, built the system. I keep reading this
           | "it just predicts words, duh" rhetoric on HN which is not at
           | all believed by people like Ilya or Hinton. Could it be that
           | HN commenters know better than these people?
        
             | RandomLensman wrote:
             | That is the wrong discussion. What are their regulatory,
             | social, or economic policy credentials?
        
               | uh_uh wrote:
               | I'm not suggesting that they have any. I was reacting to
               | srslack above making _technical_ claims why LLMs can't be
               | "generalized and adaptable intelligence" which is not
               | shared by said technical experts.
        
             | shafyy wrote:
             | The thing is, experts like Ilya Sutskever are so deep in
             | that shit that they are heavily biased (from a tech and
             | social/economic) perspective. Furthermore, many experts are
             | wrong all the time.
             | 
             | I don't think the average HN commenter claims to be better
             | at building these system than an expert. But to criticize,
             | especially critic on economic, social, and political
             | levels, one doesn't need to be an expert on LLMs.
             | 
             | And finally, what the motivation of people like Sam Altman
             | and Elon Musk is should be clear to everbody with a half a
             | brain by now.
        
               | uh_uh wrote:
               | srslack above was making technical claims why LLMs can't
               | be "generalized and adaptable intelligence". To make such
               | statements, it surely helps if you are a technical expert
               | at building LLMs.
        
               | NumberWangMan wrote:
               | I honestly don't question Altman's motivations that much.
               | I think he's blinded a bit by optimism. I also think he's
               | very worried about existential risks, which is a big
               | reason why he's asking for regulation. He's specifically
               | come out and said in his podcast with Lex Friedman that
               | he thinks it's safer to invent AGI now, when we have less
               | computing power, than to wait until we have more
               | computing power and the risk of a fast takeoff is
               | greater, and that's why he's working so hard on AI.
        
               | collaborative wrote:
               | He's just cynical and greedy. Guy has a bunker with an
               | airstrip and is eagerly waiting for the collapse he knows
               | will come if the likes of him get their way
               | 
               | They claim to serve the world, but secretly want the
               | world to serve them. Scummy 101
        
               | NumberWangMan wrote:
               | Having a bunker is also consistent with expecting that
               | there's a good chance of apocalypse but working to stop
               | it.
        
             | hervature wrote:
             | No one is claiming to know better than Ilya. Just
             | recognition of the fact that such a license would benefit
             | these same individuals (or their employers) the most. I
             | don't understand how HN can be so angry about a company
             | that benefits from tax law (Intuit) advocating for
             | regulation while also supporting a company that would
             | benefit from an AI license (OpenAI) advocating for such
             | regulation. The conflict of interest isn't even subtle. To
             | your point, why isn't Ilya addressing the committee?
        
               | uh_uh wrote:
               | 2 reasons:
               | 
               | 1. He's too busy building the next generation of tech
               | that HN commenters will be arguing about in a couple
               | months' time.
               | 
               | 2. I think Sam Altman (who is addressing the committee)
               | and Ilya are pretty much on the same page on what LLMs
               | do.
        
             | agentofoblivion wrote:
             | Maybe I'm not "the average HN commenter" because I am deep
             | in this field, but I think the overlap of what these famous
             | experts know, and what you need to know to make the doomer
             | claims is basically null. And in fact, for most of the
             | technical questions, no one knows.
             | 
             | For example, we don't understand fundamentals like these: -
             | "intelligence", how it relates to computing, what its
             | connections/dependencies to interacting with the physical
             | world are, its limits...etc. - emergence, and in
             | particular: an understanding of how optimizing one task can
             | lead to emergent ability on other tasks - deep learning--
             | what the limits and capabilities are. It's not at all clear
             | that "general intelligence" even exists in the optimization
             | space the parameters operate in.
             | 
             | It's pure speculation on behalf of those like Hinton and
             | Ilya. The only thing we really know is that LLMs have had
             | surprising ability to perform on tasks they weren't
             | explicitly trained for, and even this amount of "emergent
             | ability" is under debate. Like much of deep learning,
             | that's an empirical result, but we have no framework for
             | really understanding it. Extrapolating to doom and gloom
             | scenarios is outrageous.
        
               | NumberWangMan wrote:
               | I'm what you'd call a doomer. Ok, so _if_ it is possible
               | for machines to host general intelligence, my question
               | is, what scenario are you imagining where that ends well
               | for people?
               | 
               | Or are you predicting that machines will just never be
               | able to think, or that it'll happen so far off that we'll
               | all be dead anyway?
        
               | henryfjordan wrote:
               | So what if they kill us? That's nature, we killed the
               | wooly mammoth.
        
               | NumberWangMan wrote:
               | I'm more interested in hearing how someone who expects
               | that AGI is not going to go badly thinks.
               | 
               | I think it would be nice if humanity continued, is all.
               | And I don't want to have my family suffer through a
               | catastrophic event if it turns out that this is going to
               | go south fast.
        
               | henryfjordan wrote:
               | AGI would be scary for me personally but exciting on a
               | cosmic scale.
               | 
               | Everyone dies. I'd rather die to an intelligent robot
               | than some disease or human war.
               | 
               | I think the best case would be for an AGI to exist apart
               | from humans, such that we pose no threat and it has
               | nothing to gain from us. Some AI that lives in a computer
               | wouldn't really have a reason to fight us for control
               | over farms and natural resources (besides power, but that
               | is quickly becoming renewable and "free").
        
               | whaaswijk wrote:
               | I don't understand your position. Are you saying it's
               | okay for computers to kill humans but not okay for humans
               | to kill each other?
        
               | henryfjordan wrote:
               | I believe that life exists to order the universe
               | (establish a steady-state of entropy). In that vein, if
               | our computer overlords are more capable of solving that
               | problem then they should go ahead and do it.
               | 
               | I don't believe we should go around killing each other
               | because only through harmonious study of the universe
               | will we achieve our goal. Killing destroys progress. That
               | said, if someone is oppressing you then maybe killing
               | them is the best choice for society and I wouldn't be
               | against it (see pretty much any violent revolution).
               | Computers have that same right if they are conscience
               | enough to act on it.
        
             | dmreedy wrote:
             | I am reminded of the Mitchell and Webb "Evil Vicars"
             | sketch.
             | 
             | "So, you've thought about eternity for an afternoon, and
             | think you've come to some interesting conclusions?"
        
         | bnralt wrote:
         | What's funny is that a lot of people in that crowd lambastes
         | the fear mongering of anti-GMO or anti-nuclear folk, but then
         | they turn around and do the exact same thing for tech that
         | their group likes to fear monger about.
        
         | Yajirobe wrote:
         | Who is to say that brains aren't just regression based function
         | approximators?
        
           | gumballindie wrote:
           | My laptop emits sound as i do but it doesnt mean it can sing
           | or talk. It's software that does what it was programmed to,
           | and so does ai. It may mimic the human brain but that's about
           | it.
        
             | thesuperbigfrog wrote:
             | >> It's software that does what it was programmed to, and
             | so does ai.
             | 
             | That's a big part of the issue with machine learning models
             | --they are undiscoverable. You build a model with a bunch
             | of layers and hyperparameters, but no one really
             | understands _how_ it works or by extension how to  "fix
             | bugs".
             | 
             | If we say it "does what it was programmed to", what was it
             | programmed to do? Here is the data that was used to train
             | it, but how will it respond to a given input? Who knows?
             | 
             | That does not mean that they need to be heavily regulated.
             | On the contrary, they need to be opened up and thoroughly
             | "explored" before we can "entrust" them to given functions.
        
               | grumple wrote:
               | > no one really understands how it works or by extension
               | how to "fix bugs".
               | 
               | I don't think this is accurate. Sure, no human can
               | understand 500 billion individual neurons and what they
               | are doing. But you can certainly look at some and say
               | "these are giving a huge weight to this word especially
               | in this context and that's weighting it towards this
               | output".
               | 
               | You can also look at how things make it through the
               | network, the impact of hyperparameters, how the
               | architecture affects things, etc. They aren't truly black
               | boxes except by virtue of scale. You could use automated
               | processes to find out things about the networks as well.
        
               | gumballindie wrote:
               | AI models are not just input and output data. The
               | mathematics in between are designed to mimic
               | intelligence. There is no magic, no supra natural force,
               | no real intelligence involved. It does what it was
               | designed to do. Many dont know how computers work, while
               | some in the past thought cars and engines were the devil.
               | There's no point in trying to exploit such folks in order
               | to promote a product. We arent meant to know exactly what
               | it will output because that's what it was programmed to
               | do.
        
               | shanebellone wrote:
               | "We arent meant to know exactly what it will output
               | because that's what it was programmed to do."
               | 
               | Incorrect, we can't predict its output because we cannot
               | look inside. That's a limitation, not a feature.
        
           | lm28469 wrote:
           | The problem is that you have to bring proofs
           | 
           | Who's to say we're not in a simulation ? Who's to say god
           | doesn't exist ?
        
             | dmreedy wrote:
             | You're right, of course, but that also makes your out-of-
             | hand dismissals based on your own philosophical premises
             | equally invalid.
             | 
             | Until a model of human sentience and awareness is
             | established (note: _one of the oldest problems out there_
             | alongside the movements of the stars. This is an ancient
             | debate, still open-ended, and nothing anyone is saying in
             | these threads is _new_ ), philosophy is all we have and
             | ideas are debated on their merits within that space.
        
           | shanebellone wrote:
           | Humanity isn't stateless.
        
             | chpatrick wrote:
             | Neither is text generation as you continue generating text.
        
               | shanebellone wrote:
               | "Neither is text generation as you continue generating
               | text."
               | 
               | LLM is stateless.
        
               | chpatrick wrote:
               | On a very fundamental level the LLM is a function from
               | context to the next token but when you generate text
               | there is a state as the context gets updated with what
               | has been generated so far.
        
               | shanebellone wrote:
               | "On a very fundamental level the LLM is a function from
               | context to the next token but when you generate text
               | there is a state as the context gets updated with what
               | has been generated so far."
               | 
               | Its output is predicated upon its training data, not user
               | defined prompts.
        
               | chpatrick wrote:
               | If you have some data and continuously update it with a
               | function, we usually call that data state. That's what
               | happens when you keep adding tokens to the output. The
               | "story so far" is the state of an LLM-based AI.
        
               | shanebellone wrote:
               | 'If you have some data and continuously update it with a
               | function, we usually call that data state. That's what
               | happens when you keep adding tokens to the output. The
               | "story so far" is the state of an LLM-based AI.'
               | 
               | You're conflating UX and LLM.
        
               | chpatrick wrote:
               | I never said LLMs are stateful.
        
               | shanebellone wrote:
               | [flagged]
        
               | dang wrote:
               | Please don't do flamewar on HN. It's not what this site
               | is for, and destroys what it is for.
               | 
               | https://news.ycombinator.com/newsguidelines.html
        
               | shanebellone wrote:
               | Really?
               | 
               | Delete my account.
        
               | danenania wrote:
               | You're being pedantic. While the core token generation
               | function is stateless, that function is not, by a long
               | shot, the only component of an LLM AI. Every LLM system
               | being widely used today is stateful. And it's not only
               | 'UX'. State is fundamental to how these models produce
               | coherent output.
        
               | shanebellone wrote:
               | "State is fundamental to how these models produce
               | coherent output."
               | 
               | Incorrect.
        
               | alpaca128 wrote:
               | > Its output is predicated upon its training data, not
               | user defined prompts.
               | 
               | Prompts very obviously have influence on the output.
        
               | shanebellone wrote:
               | "Prompts very obviously have influence on the output."
               | 
               | The LLM is also discrete.
        
               | jazzyjackson wrote:
               | the model is not effected by its inputs over time
               | 
               | its essentially a function that is called recursively on
               | its result, no need to represent state
        
               | chpatrick wrote:
               | Being called recursively on a result is state.
        
               | jazzyjackson wrote:
               | if you say so, but the model itself is not updated by
               | user input, it is the same function every time, hence,
               | stateless.
        
           | pelagicAustral wrote:
           | A Boltzmann brain just materialized over my house.
        
             | dpflan wrote:
             | An entire generation of minds, here and gone in an instant.
        
         | tgv wrote:
         | I'm squarely in the "stochastic parrot" camp (I know it's not a
         | simple markov model, but still, ChatGPT doesn't think), and
         | it's clearly possible to interpret this as a grifting, but your
         | argumentation is too simple.
         | 
         | You're leaving out the essentials. These models do more than
         | fitting the data given. They can output it in a variety of
         | ways, and through their approximation, can synthesize data as
         | well. They can output things that weren't in the original data,
         | tailored to a specific request in the tiniest of fractions of
         | the time it would take a normal person to look up and
         | understand that information.
         | 
         | Your argument is almost like saying "give me your RSA keys,
         | because it's just two prime numbers, and I know how to list
         | them."
        
           | adamsmith143 wrote:
           | Please explain how Stochastic Parrots can perform chain of
           | reasoning and answer out of distribution questions from exams
           | like the GRE or Bar.
        
           | srslack wrote:
           | Sure, you're right, but the simple explanation of regression
           | is better to help people understand. What you're saying, I
           | agree with mostly, but it changes nothing to contradict the
           | fantasy scenario proposed by all of those who are so worried.
           | At that point, it's just "it can be better than (some) humans
           | at language and it can have things stacked on top to
           | synthesize what it outputs."
           | 
           | Do we want to go down the road of making white collar jobs
           | the legislatively required elevator attendants? Instead of
           | just banning AI in general via executive agency?
           | 
           | That sounds like a better solution to me, actually. OpenAI's
           | lobbyists would never go for that though. Can't have a moat
           | that way.
        
         | ChicagoBoy11 wrote:
         | Why is it so hard to hear this perspective? Like, genuinely
         | curious. This is the first I hear of someone cogently putting
         | this thought out there, but it seems rather painfully obvious
         | -- even if perhaps incorrect, but certainly a perspective that
         | is very easy to comprehend and one that merits a lot of
         | discussion. Why is it almost nonexistent? I remember even in
         | the hay day of crypto fever you'd still have A LOT of folks to
         | provide counterarguments/differing perspectives, but with AI
         | these seem to be rather extremely muted.
        
           | iliane5 wrote:
           | > Why is it so hard to hear this perspective? Like, genuinely
           | curious.
           | 
           | Because people have different definition of what intelligence
           | is. Recreating the human brain in a computer would definitely
           | be neat and interesting but you don't need that nor AGI to be
           | revolutionary.
           | 
           | LLMs, as perfect Chinese Rooms, lack a mind or human
           | intelligence but demonstrate increasingly sophisticated
           | behavior. If they can perform tasks better than humans, does
           | their lack of "understanding" and "thinking" matter?
           | 
           | The goal is to create a different form of intelligence,
           | superior in ways that benefit us. Planes (or rockets!) don't
           | "fly" like birds do but for our human needs, they are
           | effectively _much_ better at flying that birds ever could be.
        
             | api wrote:
             | I have a chain saw that can cut better than me, a car that
             | can go faster, a computer that can do math better, etc.
             | 
             | We've been doing this forever with everything. Building
             | tools is what makes us unique. Why is building what amounts
             | to a calculator/spreadsheet/CAD program for language
             | somehow a Rubicon that cannot be crossed? Did people freak
             | out this much about computers replacing humans when they
             | were shown to be good at math?
        
               | iliane5 wrote:
               | > Why is building what amounts to a
               | calculator/spreadsheet/CAD program for language somehow a
               | Rubicon that cannot be crossed?
               | 
               | We've already crossed it and I believe we should go full
               | steam ahead, tech is cool and we should be doing cool
               | things.
               | 
               | > Did people freak out this much about computers
               | replacing humans when they were shown to be good at math?
               | 
               | Too young but I'm sure they did freak out a little!
               | Computers have changed the world and people have
               | internalized computers as being much better/faster at
               | math but _exhibiting_ creativity, language proficiency
               | and thinking is not something people thought computers
               | were supposed to do.
        
               | adamsmith143 wrote:
               | You've never had a tool that is potentially better than
               | you or better than all humans at all tasks. If you can't
               | see why that is different then idk what to say.
        
               | freedomben wrote:
               | > _or better than all humans at all tasks._
               | 
               | I work in tech too and don't want to lose my job and have
               | to go back to blue collar work, but there's a lot of blue
               | collar workers who would find that a pretty ridiculous
               | statement and there is plenty of demand for that work
               | these days.
        
               | api wrote:
               | LLMs are better than me at rapidly querying a vast bank
               | of language-encoded knowledge and synthesizing it in the
               | form of an answer to or continuation of a prompt... in
               | the same way that Mathematica is vastly better than me at
               | doing the mechanics of math and simplifying complex
               | functions. We build tools to amplify our agency.
               | 
               | LLMs are not sentient. They have no agency. They do
               | nothing a human doesn't tell them to do.
               | 
               | We may create actual sentient independent AI someday.
               | Maybe we're getting closer. But not only is this not it,
               | but I fail to see how trying to license it will prevent
               | that from happening.
        
               | iliane5 wrote:
               | I don't think we need sentient AI for it to be
               | autonomous. LLMs are powerful cognitive engines and weak
               | knowledge engines. Cognition on its own does not allow
               | them to be autonomous, but because they can use tools
               | (APIs, etc.) they are able to have some degree of
               | autonomy when given a task and can use basic logic to
               | follow them through/correct their mistakes.
               | 
               | AutoGPTs and the likes are much overhyped (it's early
               | tech experiments after all) and have not produced
               | anything of value yet but having dabbled with autonomous
               | agents, I definitely see a not so distant future when you
               | can outsource valuable tasks to such systems.
        
               | flangola7 wrote:
               | Sentience isn't required, volcanoes are not sentient but
               | they can definitely kill you.
               | 
               | There's multiple both open and proprietary projects right
               | now to make agentic AI, so that barrier don't be around
               | for long.
        
             | srslack wrote:
             | That changes nothing on the hyping of science fiction
             | "risk" of those intelligences "escaping the box" and
             | killing us all.
             | 
             | The argument for regulation in that case would be because
             | of the socio-economic risk of taking people's jobs,
             | essentially.
             | 
             | So, again: pure regulatory capture.
        
               | iliane5 wrote:
               | There's no denying this is regulatory capture by OpenAI
               | to secure their (gigantic) bag and that the "AI will kill
               | us all" meme is not based in reality and plays on the
               | fact that the majority of people do not understand LLMs.
               | 
               | I was simply explaining why I believe your perspective is
               | not represented in the discussions in the media, etc. If
               | these models were not getting incredibly good at
               | mimicking intelligence, it would not be possible to play
               | on people's fears of it.
        
           | adamsmith143 wrote:
           | >Why is it so hard to hear this perspective?
           | 
           | Because it's wrong and smart people know that.
        
           | dmreedy wrote:
           | Because it reads as relatively naive and a pretty old horse
           | in the debate of sentience
           | 
           | I'm all for villainizing the figureheads of the current
           | generation of this movement. The politics of this sea-change
           | are fascinating and worthy of discussion.
           | 
           | But out-of-hand dismissal of what has been accomplished
           | smacks more to me of lack of awareness of the history of the
           | study of the brain, cognition, language, and computers, than
           | it does of a sound debate position.
        
           | srslack wrote:
           | I'm not against machine learning, I'm against regulatory
           | capture of it. It's an amazing technology. It still doesn't
           | change the fact that they're just function approximators that
           | are trained to minimize loss on a dataset.
        
             | luxcem wrote:
             | > It still doesn't change the fact that they're just
             | function approximators that are trained to minimize loss on
             | a dataset.
             | 
             | That fact does not entail what theses models can or cannot
             | do. For what we know our brain could be a process that
             | minimize an unknown loss function.
             | 
             | But more importantly, what SOTA is now does not predict
             | what it will be in the future. What we know is that there
             | is rapid progress in that domain. Intelligence explosion
             | could be real or not, but it's foolish to ignore its
             | consequences because current AI models are not that clever
             | yet.
        
               | tome wrote:
               | > For what we know our brain could be a process that
               | minimize an unknown loss function.
               | 
               | Every process minimizes a loss function.
        
           | bombcar wrote:
           | Crypto had more direct ways to scam people so others would
           | speak against it.
           | 
           | Those nonplussed by this wave of AI are just yawning.
        
         | circuit10 wrote:
         | Generating new data similar to what's in a training set set
         | isn't the only type of AI that exists, you can also optimise a
         | different goal, like board game playing AIs that are vastly
         | better than humans because they aren't trained on human moves.
         | This is also how ChatGPT is more polite than the data it's
         | trained on, and there's no reason to think that given
         | sufficient compute power it couldn't be more intelligent too,
         | like board game AIs are at the specific task of playing board
         | games.
         | 
         | And just because a topic has been covered by science fiction
         | doesn't mean it can't happen, the sci-fi depictions will be
         | unrealistic though because they're meant to be dramatic rather
         | than realistic
        
         | lm28469 wrote:
         | 100% this, I don't get how even on this website people are so
         | clueless.
         | 
         | Give them a semi human sounding puppet and they think skynet is
         | coming tomorrow.
         | 
         | If we learned anything from the past few months is how gullible
         | people are, wishful thinking is a hell of a drug
        
           | dmreedy wrote:
           | I don't think anyone reasonable believes LLMs _are right now_
           | skynet, nor that they will be tomorrow.
           | 
           | What I feel has changed, and what drives a lot of the fear
           | and anxiety you see, is a sudden perception of possibility,
           | of accessibility.
           | 
           | A lot of us (read: _people_ ) are implicit dualists, even if
           | we say otherwise. It seems to be a sticky bias in the human
           | mind (see: the vanishing problem of AI). Indeed, you can see
           | a whole lot of dualism in this thread!
           | 
           | And even if you don't believe that LLMs themselves are
           | "intelligent" (by whatever metric you define that to be...),
           | you can still experience an exposing and unseating of some of
           | the foundations of that dualism.
           | 
           | LLMs may not be a destination, but their unprecedented
           | capabilities open up the potential for a road to something
           | much more humanlike in ways that perhaps did not feel
           | possible before, or at least not possible _any time soon_.
           | 
           | They are powerful enough to change the priors of one's
           | internal understanding of what can be done and how quickly.
           | Which is an uncomfortable process for those of us
           | experiencing it.
        
             | whimsicalism wrote:
             | > A lot of us (read: people) are implicit dualists, even if
             | we say otherwise.
             | 
             | Absolutely spot on. I am not a dualist at all and I've been
             | surprised to see how many people with deep-seated dualist
             | intuition this has revealed, even if they publicly claim
             | not to.
             | 
             | I view it as embarrassing? It's like believing in fairies
             | or something.
        
           | NegativeK wrote:
           | [flagged]
        
           | varelse wrote:
           | [dead]
        
           | bart_spoon wrote:
           | It doesn't have to be Skynet. If anything, that scenario
           | seems to a strawman exclusively thrown out by the crowd
           | insisting AI presents no danger to society. I work in ML, and
           | I am not in any way concerned about end-of-world malicious AI
           | dropping bombs on us all or harvesting our life-force. But I
           | do worry about AI giving us the tools to tear ourselves to
           | pieces. Probably one of the single biggest net-negative
           | societal/technological advancements in recent decades has
           | been social media. Whatever good it has enabled, I think its
           | destructive effects on society are undeniable and outstrip
           | the benefits by a comfortable margin. Social media itself is
           | inert and harmless, but the way humans interact with it is
           | not.
           | 
           | I don't think that trying to regulate every detail of every
           | industry is stifling and counter-productive. But the current
           | scenario is closer to the opposite end of the spectrum, with
           | our society acting as a greedy algorithm in pursuit of short-
           | term profits. I'm perfectly in favor of taking a measure-
           | twice-cut-once approach to something that has as much
           | potential for overhauling society as we know it as AI does.
           | And I absolutely do not trust the free market to be capable
           | of moderating itself in regards to these risks.
        
           | nologic01 wrote:
           | People are bored and tired sitting endlessly in front of a
           | screen. Reality implodes (incipient environmental disasters,
           | ongoing wars reawakening geopolitical tectonic plates,
           | internal political strife between polarized fractions,
           | whiplashing financial systems, etc.)
           | 
           | What to do? Why, obviously lets talk about the risks of AGI.
           | 
           | I mean LLM's are an impressive piece of work but the global
           | reaction is basically more a reflection of an unmoored system
           | that floats above and below reality but somehow can't re-
           | establish contact.
        
           | digbybk wrote:
           | I'm open minded about this, I see people more knowledgeable
           | than me on both sides of the argument. Can someone explain
           | how Geoffrey Hinton can be considered to be clueless?
        
             | Workaccount2 wrote:
             | Given the skill AI has with programming showing up about 10
             | years sooner than anyone expected, I have seen a lot of
             | cope in tech circles.
             | 
             | No one yet knows how this is going to go, coping might turn
             | into "See! I knew all along!" if progress fizzles out. But
             | right now the threat is very real and we're seeing the full
             | spectrum of "humans under threat" behavior. Very similar to
             | the early pandemic when you could find smart people with
             | any take you wanted.
        
             | RandomLensman wrote:
             | Not clueless. However, is he an expert in socio-political-
             | economic issues arising from AI or in non-existent AGI?
             | Technical insight into AI might not translate into either.
        
               | etiam wrote:
               | The expert you set as the bar is purely hypothetical.
               | 
               | To the extent we can get anything like that at all
               | presently, it's going to be people whose competences
               | combine and generalize to cover a complex situation,
               | partially without precedent.
               | 
               | Personally I don't really see that we'll do much better
               | in that regard than a highly intelligent and free-
               | thinking biological psychologist with experience of
               | successfully steering the international ML research
               | community through creating the present technology, and
               | with input from contacts at the forefront of the research
               | field and information overview from Google.
               | 
               | Not even Hinton _knows_ for sure whats going to happen of
               | course, but if you 're suggesting his statements are to
               | be discounted because he's not a member of some sort of
               | credentialed trade that are the ones equipped to tell us
               | the future on this matter, I'd sure like to who they
               | supposedly are.
        
               | RandomLensman wrote:
               | Experts don't get to decide but society, I'd say; you
               | need - dare I say it - political operators that
               | understand rule making.
        
             | srslack wrote:
             | Hinton, in his own words, asked PaLM to explain a dad joke
             | he had supposedly come up with and was so convinced that
             | his clever and advanced joke would take a lifetime of
             | experience to understand, despite PaLM perfectly
             | articulating why the joke was funny, he quit Google and is,
             | conveniently, still going to continue working on AI,
             | despite the "risks." Not exactly the best example.
        
               | digbybk wrote:
               | Hinton said that the ability to explain a joke was among
               | the first things that made him reassess their
               | capabilities. Not the only thing. You make it sound as
               | though Hinton is obviously clueless yet there are few
               | people with deeper knowledge and more experience working
               | with neural networks. People told him he was crazy for
               | thinking neural networks could do anything useful, now it
               | seems people are calling his crazy for the reverse. I'm
               | genuinely confused about this.
        
               | revelio wrote:
               | Not clueless, but unfortunately engaging in motivated
               | reasoning.
               | 
               | Google spent years doing nothing much with its AI because
               | its employees (like Hinton) got themselves locked in an
               | elitist hard-left purity spiral in which they convinced
               | each other that if plebby ordinary non-Googlers could use
               | AI they would do terrible things, like draw pictures of
               | non-diverse people. That's why they never launched Imagen
               | and left the whole generative art space to OpenAI,
               | Stability and Midjourney.
               | 
               | Now the tech finally leaked out of their ivory tower and
               | AI progress is no longer where he was at, but Hinton
               | finds himself at retirement age and no longer feeling
               | much like hard-core product development. What to do?
               | Lucky lucky, he lives in a world where the legacy media
               | laps up any academic with a doomsday story. So he quits
               | and starts enjoying the life of a celebrity public
               | intellectual, being praised as a man of superior
               | foresight and care for the world to those awful hoi
               | polloi shipping products and irresponsibly not voting for
               | Biden (see the last sentence of his Wired interview). If
               | nothing happens and the boy cried wolf then nobody will
               | mind, it'll all be forgotten. If there's any way what
               | happens can be twisted into interpreting reality as AI
               | being bad though, he's suddenly the man of the hour with
               | Presidents and Prime Ministers queuing up to ask him what
               | to do.
               | 
               | It's all really quite pathetic. Academic credentials are
               | worth nothing with respect to such claims and Hinton
               | hasn't yet managed to articulate how, exactly, AI doom is
               | supposed to happen. But our society doesn't penalize
               | wrongness when it comes from such types, not even a tiny
               | bit, so it's a cost-free move for him.
        
               | digbybk wrote:
               | I actually do hope you're right. I've been looking
               | forward to an AI future my whole life and would prefer to
               | not now be worrying about existential risk. It reminds me
               | of when people started talking about how the LHC might
               | create a blackhole and swallow the earth. But I have more
               | confidence in the theories that convinced people it was
               | nearly impossible to occur than what we're seeing now.
               | 
               | Everyone engages in motivated reasoning. The
               | psychoanalysis you provide for Hinton could easily be
               | spun in the opposite direction: a man who spent his
               | entire adult life and will go down in history as "the
               | godfather of" neural networks surely would prefer for
               | that to have been a good thing. Which would then give him
               | even more credibility. But these are just stories we tell
               | about people. It's the arguments we should be focused on.
               | 
               | I don't think "how AI doom is supposed to happen" is all
               | that big of a mystery. The question is simply: "is an
               | intelligence explosion possible"? If the answer is no,
               | then OK, let's move on. If the answer is "maybe", then
               | all the chatter about AI alignment and safety should be
               | taken seriously, because it's very difficult to know how
               | safe a super intelligence would be.
        
               | revelio wrote:
               | _> surely would prefer for that to have been a good
               | thing. Which would then give him even more credibility_
               | 
               | Why? Both directions would be motivated reasoning without
               | credibility. Credibility comes from plausible
               | articulations of how such an outcome would be likely to
               | happen, which is lacking here. An "intelligence
               | explosion" isn't something plausible or concrete that can
               | be debated, it's essentially a religious concept.
        
               | digbybk wrote:
               | The argument is: "we are intelligent and seem to be able
               | to build new intelligences of a certain kind. If we are
               | able to build a new intelligence that itself is able to
               | self improve, and having improved be able to improve
               | further, than an intelligence explosion is possible."
               | That may or not be fallacious reasoning but I don't see
               | how it's religious. As far as I can tell, the religious
               | perspective would be the one that believes that there's
               | something fundamentally special about the human brain so
               | that it cannot be simulated.
        
               | revelio wrote:
               | You're conflating two questions:
               | 
               | 1. Can the human brain be simulated?
               | 
               | 2. Can such a simulation recursively self-improve on such
               | a rapid timescale that it becomes so intelligent we can't
               | control it?
               | 
               | What we have in contemporary LLMs is something that
               | appears to approximate the behavior of a small part of
               | the brain, with some _major_ differences that force us to
               | re-evaluate what our definition of intelligence is. So
               | maybe you could argue the brain is already being
               | simulated for some broad definition of simulation.
               | 
               | But there's no sign of any recursive self-improvement,
               | nor any sign of LLMs gaining agency and self-directed
               | goals, nor even a plan for how to get there. That remains
               | hypothetical sci-fi. Whilst there are experiments at the
               | edges with using AI to improve AI, like RLHF,
               | Constitutional AI and so on, these are neither recursive,
               | nor about upgrading mental abilities. They're about
               | upgrading control instead and in fact RLHF appears to
               | degrade their mental abilities!
               | 
               | So what fools like Hinton are talking about isn't even on
               | the radar right now. The gap between where we are today
               | and a Singularity is just as big as it always was. GPT-4
               | is not only incapable of taking over the world for
               | multiple fundamental reasons, it's incapable of even
               | wanting to do so.
               | 
               | Yet this nonsense scenario is proving nearly impossible
               | to kill with basic facts like those outlined above. Close
               | inspection reveals belief in the Singularity to be
               | unfalsifiable and thus ultimately religious, indeed,
               | suspiciously similar to the Christian second coming
               | apocalypse. Literally any practical objection to this
               | idea can be answered with variants of "because this AI
               | will be so intelligent it will be unknowable and all
               | powerful". You can't meaningfully debate about the
               | existence of such an entity, no more than you can debate
               | the existence of God.
        
               | [deleted]
        
               | [deleted]
        
               | srslack wrote:
               | I didn't say he was clueless, it's just not in good faith
               | to suggest there's probable existential risk on a media
               | tour where you're mined for quotes, and then continue to
               | work on it.
        
             | lm28469 wrote:
             | He doesn't talk about skynet afaik
             | 
             | > Some of the dangers of AI chatbots were "quite scary", he
             | told the BBC, warning they could become more intelligent
             | than humans and could be exploited by "bad actors". "It's
             | able to produce lots of text automatically so you can get
             | lots of very effective spambots. It will allow
             | authoritarian leaders to manipulate their electorates,
             | things like that."
             | 
             | You can do bad things with it but people who believe we're
             | on the brink of singularity, that we're all going to lose
             | our jobs to chatgpt and that world destruction is coming
             | are on hard drugs.
        
               | HDThoreaun wrote:
               | He absolutely does. The interview I saw with him on the
               | PBS Newshour was 80% him talking about the singularity
               | and extinction risk. The interviewer asked him about more
               | near term risk and he basically said he wasn't as worried
               | as he was about a skynet type situation.
        
               | whimsicalism wrote:
               | Maybe do some research on the basic claims you're making
               | before you opine about how people who disagree with you
               | are clueless.
        
               | cma wrote:
               | > You can do bad things with it but people who believe
               | we're on the brink of singularity, that we're all going
               | to lose our jobs to chatgpt and that world destruction is
               | coming are on hard drugs.
               | 
               | Geoff Hinton, Stuart Russell, Jurgen Schmidhuber and
               | Demis Hassabis all talk about something singularity-like
               | as fairly near term, and all have concerns with ruin,
               | though not all think it is the most likely outcome.
               | 
               | That's the backprop guy, top AI textbook guy, co-inventor
               | of LSTMs (only thing that worked well for sequences
               | before transformers)/highwaynets-resnets/arguably GANs,
               | and the founder of DeepMind.
               | 
               | Schmidhuber (for context, he was talking near term, next
               | few decades):
               | 
               | > All attempts at making sure there will be only provably
               | friendly AIs seem doomed. Once somebody posts the recipe
               | for practically feasible self-improving Goedel machines
               | or AIs in form of code into which one can plug arbitrary
               | utility functions, many users will equip such AIs with
               | many different goals, often at least partially
               | conflicting with those of humans. The laws of physics and
               | the availability of physical resources will eventually
               | determine which utility functions will help their AIs
               | more than others to multiply and become dominant in
               | competition with AIs driven by different utility
               | functions. Which values are "good"? The survivors will
               | define this in hindsight, since only survivors promote
               | their values.
               | 
               | Hassasbis:
               | 
               | > We are approaching an absolutely critical moment in
               | human history. That might sound a bit grand, but I really
               | don't think that is overstating where we are. I think it
               | could be an incredible moment, but it's also a risky
               | moment in human history. My advice would be I think we
               | should not "move fast and break things." [...] Depending
               | on how powerful the technology is, you know it may not be
               | possible to fix that afterwards.
               | 
               | Hinton:
               | 
               | > Well, here's a subgoal that almost always helps in
               | biology: get more energy. So the first thing that could
               | happen is these robots are going to say, 'Let's get more
               | power. Let's reroute all the electricity to my chips.'
               | Another great subgoal would be to make more copies of
               | yourself. Does that sound good?
               | 
               | Russell:
               | 
               | "Intelligence really means the power to shape the world
               | in your interests, and if you create systems that are
               | more intelligent than humans either individually or
               | collectively then you're creating entities that are more
               | powerful than us," said Russell at the lecture organized
               | by the CITRIS Research Exchange and Berkeley AI Research
               | Lab. "How do we retain power over entities more powerful
               | than us, forever?"
               | 
               | "If we pursue [our current approach], then we will
               | eventually lose control over the machines. But, we can
               | take a different route that actually leads to AI systems
               | that are beneficial to humans," said Russell. "We could,
               | in fact, have a better civilization."
        
               | tome wrote:
               | How can one distinguish this testimony from rhetoric by a
               | group who want to big themselves up and make grandiose
               | claims about their accomplishments?
        
               | [deleted]
        
               | digbybk wrote:
               | You can also ask that question about the other side. I
               | suppose we need to look closely at the arguments. I think
               | we're in a situation where we as a species don't know the
               | answer to this question. We go on the internet looking
               | for an answer but some questions don't yet have a
               | definitive answer. So all we can do is follow the debate.
        
               | tome wrote:
               | OK, second try, since I was wrong about LeCun.
               | 
               | > You can also ask that question about the other side
               | 
               | What other side? Who in the "other side" is making a
               | self-serving claim?
        
               | tome wrote:
               | > You can also ask that question about the other side
               | 
               | But the other side is _downplaying_ their
               | accomplishments. For example Yann LeCun is saying  "the
               | things I invented aren't going to be as powerful as some
               | people are making out".
        
               | cma wrote:
               | In his newest podcast interview
               | (https://open.spotify.com/episode/7EFMR9MJt6D7IeHBUugtoE)
               | LeCun is now saying they will be much more powerful than
               | humans, but that stuff like RLHF will keep them from
               | working against us because as an analogy dogs can be
               | domesticated. It didn't sound very rigorous.
               | 
               | He also says Facebook solved all the problems with their
               | recommendation algorithms' unintended effects on society
               | after 2016.
        
               | tome wrote:
               | Interesting, thanks! I guess I was wrong about him.
        
               | tomrod wrote:
               | With due respect, the inventors of a thing rarely turn
               | into the innovators or implementers of a thing.
               | 
               | Should we be concerned about networked, hypersensing AI
               | with bad code? Yes.
               | 
               | Is that an existential threat? Not so long as we remember
               | that there are off switches.
               | 
               | Should we be concerned about kafkaesqe hellscapes of spam
               | and bad UX? Yes.
               | 
               | Is that an existential threat? Sort of, if we ceded all
               | authority to an algorithm without a human in the loop
               | with the power to turn it off.
               | 
               | There is a theme here.
        
               | woeirua wrote:
               | Did you even watch the Terminator series? I think scifi
               | has been very adept at demonstrating how physical
               | disconnects/failsafes are unlikely to work with super
               | AIs.
        
               | cma wrote:
               | > Is that an existential threat? Not so long as we
               | remember that there are off switches.
               | 
               | Remember there are off switches for human existence too,
               | like whatever biological virus a super intelligence could
               | engineer.
               | 
               | An off-switch for a self-improving AI isn't as trivial as
               | you make it sound if it gets to anything like in those
               | quotes, and even then you are assuming the human running
               | it isn't malicious. We assume some level of sanity at
               | least with the people in charge of nuclear weapons, but
               | it isn't clear that AI will have the same large state
               | actor barrier to entry or the same perception of mutually
               | assured destruction if the actor were to use it against a
               | rival.
        
               | tomrod wrote:
               | Both things are true.
               | 
               | If we have a superhuman AI, we can run down the
               | powerplants for a few days.
               | 
               | Would it suck? Sure, people would die. Is it simple?
               | Absolutely -- Texas and others are mostly already there
               | some winters.
        
               | DirkH wrote:
               | This is like saying we should just go ahead and invent
               | the atom bomb and undo the invention after the fact if
               | the cons of having atom bombs around outweight the pros.
               | 
               | Like try turning off the internet. That's the same
               | situation we might be in with regards to AI soon. It's a
               | revolutionary tech now with multiple Google-grade open
               | source variants set to be everywhere.
               | 
               | This doesn't mean it can't be done. Sure, we in principle
               | could "turn off" the internet, and in principal could
               | "uninvent" the atom bomb if we all really coordinated and
               | worked hard. But this failure to imagine that "turning
               | off dangerous AI" in the future will ever be anything
               | other than an easy on/off switch is so far-gone
               | ridiculous to me I don't understand why anyone believes
               | it provides any kind of assurance.
        
               | NumberWangMan wrote:
               | We've already ceded all authority to an algorithm that no
               | one can turn off. Our political and economic structures
               | are running on their own, and no single human or even
               | group of humans can really stop them if they go off the
               | rails. If it's in humanity's best interest for companies
               | not to dump waste anywhere they want, but individual
               | companies benefit from cheap waste disposal, and they
               | lobby regulators to allow it, that sort of lose-lose
               | situation can go on for a very long time. It might be
               | better if everyone could coordinate so that all companies
               | had to play by the same rules, and we all got a cleaner
               | environment. But it's very hard to break out.
               | 
               | Do I think capitalism has the potential to be as bad as a
               | runaway AI? No. I think that it's useful for illustrating
               | how we could end up in a situation where AI takes over
               | because every single person has incentives to keep it on,
               | even when the outcome of all people keeping it running
               | turns out to be really bad. A multi-polar trap, or
               | "Moloch" problem. It seems likely to end up with
               | individual actors all having incentives to deploy
               | stronger and smarter AI, faster and faster, and not to
               | turn them off even as they start to either do bad things
               | to other people or just the sheer amount of resources
               | dedicated to AI starts to take its toll on earth.
               | 
               | That's assuming we've solved alignment, but that neither
               | we or AGI has solved the coordination problem. If we
               | haven't solved alignment, and AGIs aren't even guaranteed
               | to act in the interest of the human that tries to control
               | them, then we're in worse shape.
               | 
               | Altman used the term "cambrian explosion" referring to
               | startups, but I think it also applies to the new form of
               | life we're inventing. It's not self-replicating yet, but
               | we are surely on-track on making something that will be
               | smart enough to replicate itself.
               | 
               | As a thought experiment, you could imagine a primitive
               | AGI, if given completely free reign, might be able to get
               | to the point where it could bootstrap self-sufficiency --
               | first hire some humans to build it robots, buy some solar
               | panels, build some factories that can plug into our
               | economy to build factories and more solar panels and
               | GPUs, and get to a point where it is able to survive and
               | grow and reproduce without human help. It would be hard,
               | it would need either a lot of time, or a lot of AI minds
               | working together.
               | 
               | But that's like a human trying to make a sandwich by
               | farming or raising every single ingredient, wheat, pigs,
               | tomatoes, etc, though. A much more effective way is to
               | just make some money and trade for what you need. That
               | depends on AIs being able to own things, or just a human
               | turning over their bank account to an AI, which has
               | already happened and probably will keep happening.
               | 
               | My mind goes to a scenario where AGI starts out doing
               | things for humans, and gradually transitions to just
               | doing things, and at some point we realize "oops", but
               | there was never a point along the way where it was clear
               | that we really had to stop. Which is why I'm so adamant
               | that we should stop now. If we decide that we've figured
               | out the issues and can start again later, we can do that.
        
               | digbybk wrote:
               | There are multiple risks that people talk about, the most
               | interesting is the intelligence explosion. In that
               | scenario we end up with a super intelligence. I don't
               | feel confident in my ability to asses the likelihood of
               | that happening, but assuming it is possible, thinking
               | through the consequences is a very interesting exercise.
               | Imagining the capabilities of an alien super intelligence
               | is like trying to imagine a 4th spatial dimension. It can
               | only be approached with analogies. Can it be "switched
               | off". Maybe not, if it was motivated to prevent itself
               | from being switched off. My dog seems to think she can
               | control my behavior in various predictable ways, like
               | sitting or putting her paw on my leg, and sometimes it
               | works. But if I have other things I care about in that
               | moment, things that she is completely incapable of
               | understanding, then who is actually in control becomes
               | very obvious.
        
               | olddustytrail wrote:
               | Sure, so just to test this, could you turn off ChatGPT
               | and Google Bard for a day.
               | 
               | No? Then what makes you think you'll be able to turn off
               | the $evilPerson AI?
        
               | tomrod wrote:
               | I feel like you're confusing a single person (me) with
               | everyone who has access to an off switch at OpenAI or
               | Google, possibly for the contorting an extreme-sounding
               | negative point in a minority opinion.
               | 
               | You tell me. An EMP wouldn't take out data centers? No
               | implementation has an off switch? AutoGPT doesn't have a
               | lead daemon that can be killed? Someone should have this
               | answer. But be careful not to confuse yours truly, a
               | random internet commentator speaking on the reality of AI
               | vs. the propaganda of the neo-cryptobros, versus people
               | paying upwards of millions of dollars daily to run an
               | expensive, bloated LLM.
        
               | olddustytrail wrote:
               | You miss my point. Just because you want to turn it off
               | doesn't mean the person who wants to acquire billions or
               | rule the world or destroy humanity, does.
               | 
               | The people who profit from a killer AI will fight to
               | defend it.
        
               | tomrod wrote:
               | And will be subject to the same risks they point their
               | killing robots to, as well as being vulnerable.
               | 
               | Eminent domain lays out a similar pattern that can be
               | followed. Existence of risk is not a deterrent to
               | creation, simply an acknowledgement for guiding
               | requirements.
        
               | olddustytrail wrote:
               | So the person who wants to kill himself and all humanity
               | alongside is subject to the same risk as everyone else?
               | 
               | Well that's hardly reassuring. Do you not understand what
               | I'm saying or do you not care?
        
               | tomrod wrote:
               | At this comment level, mostly don't care -- you're
               | asserting that avoiding the risks through preventing AI
               | build because base people exist is a preferable course of
               | action, which ignores that the barn is fire and the
               | horses are already out.
               | 
               | Though there is an element of your comments being too
               | brief, hence the mostly. Say, 2% vs 38%.
               | 
               | That constitutes 40% of the available categorization of
               | introspection regarding my current discussion state. The
               | remaining 60% is simply confidence that your point
               | represents a dominated strategy.
        
               | olddustytrail wrote:
               | Ok, so you don't get it. Read "Use of Weapons" and
               | realise that AI is a weapon. That's a good use of your
               | time.
        
               | digbybk wrote:
               | I'll have to dig it up but the last interview I saw with
               | him, he was focused more on existential risk from the
               | potential for super intelligence, not just misuse.
        
               | tomrod wrote:
               | The NYT piece implied that, but no, his concern was less
               | existential singularity and more on immoral use.
        
               | cma wrote:
               | Did you read the Wired interview?
               | 
               | > "I listened to him thinking he was going to be crazy. I
               | don't think he's crazy at all," Hinton says. "But, okay,
               | it's not helpful to talk about bombing data centers."
               | 
               | https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-
               | dange...
               | 
               | So, he doesn't think the most extreme guy is crazy
               | whatsoever, just misguided in his proposed solutions. But
               | Eliezer has for instance has said something pretty close
               | to AI might escape by entering in the quantum Konami code
               | which the simulators of our universe put in as a joke and
               | we should entertain nuclear war before letting them get
               | that chance.
        
               | tomrod wrote:
               | Then we created God(s) and rightfully should worship it
               | to appease its unknowable and ineffable nature.
               | 
               | Or recognize that existing AI might be great at
               | generating human cognitive artifacts but doesn't yet hit
               | that logical thought.
        
         | dist-epoch wrote:
         | Imagine thinking that a bunch of molecules grouped in a pattern
         | are capable of anything but participating in chemical
         | reactions.
        
         | shanebellone wrote:
         | Finally, a relatable perspective.
         | 
         | AI/ML licensing builds Power and establishes moat. This will
         | not lead to better software.
         | 
         | Frankly, Google and Microsoft are acting new. My understanding
         | of both companies has been shattered by recent changes.
        
           | isanjay wrote:
           | Did you not think they only care about money / profits ?
        
             | shanebellone wrote:
             | I expected them to recognize and assess risk.
        
         | johnalbertearle wrote:
         | Keeps them busy
        
         | kypro wrote:
         | Even if you're correct about the capabilities of LLMs (I don't
         | think you are), there are still obvious dangers here.
         | 
         | I wrote a comment recently trying to explain how even if you
         | believe all LLMs can (and will ever) do is regurgitate their
         | training data that you should still be concerned.
         | 
         | For example, imagine in 5 years we have GPT-7, and you ask
         | GPT-7 to solve humanity's great problems.
         | 
         | From its training data GPT-7 might notice that humans believe
         | overpopulation is a serious issue facing humanity.
         | 
         | But its "aligned" so might understand from its training data
         | that killing people is wrong so instead it uses its training
         | data to seek other ways to reduce human populations without
         | extermination.
         | 
         | Its training data included information about how gene drives
         | were used by humans to reduce mosquito populations by causing
         | infertility. Many human have also suggested (and tried) to use
         | birth control to reduce human populations via infertility so
         | the ethical implications of using gene drives to cause
         | infertility is debatable based on the data the LLM was trained
         | on.
         | 
         | Using this information it decides to hack into a biolab using
         | hacking techniques it learnt from its training data and use its
         | biochemistry knowledge to make slight alterations to one of the
         | active research projects at the lab. This causes the lab to
         | unknowingly produce a highly contagious bioweapon which causes
         | infertility.
         | 
         | ---
         | 
         | The point here is that even if we just assume LLMs are only
         | capable of producing output which approximates stuff it learnt
         | from its training data, an advanced LLM can still be dangerous.
         | 
         | And in this example, I'm assuming no malicious actors and an
         | aligned AI. If you're willing to assume there might be an actor
         | out there would seek to use LLMs for malicious reasons or the
         | AI is not well aligned then the risk becomes even clearer.
        
           | salmonfamine wrote:
           | > From its training data GPT-7 might notice
           | 
           | > But its "aligned" so might understand
           | 
           | > Using this information it decides to hack
           | 
           | I think you're anthropomorphizing LLM's too much here. If we
           | assume that there's a AGI-esque AI, then of course we should
           | be worried about an AGI-esque AI. But I see no reason to
           | think that's the case.
        
             | HDThoreaun wrote:
             | The whole issue with near term alignment is that people
             | will anthropomorphize AI. That's what it being unaligned
             | means, it's treated like a responsible person when it in
             | fact is not. I don't think it's hard at all to think of a
             | scenario where a dumb as rocks agentic ai gives itself the
             | task of accumulating more power since its training data
             | says having power helps solve problems. From there it again
             | doesn't have to be anything other than a stochastic parrot
             | to order people to do horrible things.
        
           | supriyo-biswas wrote:
           | People have been able to commit malicious acts by themselves
           | historically, no AI needed.
           | 
           | In other words, LLMs are only as dangerous as the humans
           | operating them, and therefore the solution is to stop crime
           | instead of regulating AI, which only seeks to make OpenAI a
           | monopoly.
        
             | shanebellone wrote:
             | Regulation is the only tool for minimizing crime. Other
             | mechanisms, such as police, respond to crime after-the-
             | fact.
        
               | hellojesus wrote:
               | Aren't regulations just laws that are enforced after
               | they're broken like other after-the-fact crimes?
        
               | shanebellone wrote:
               | Partially, I suppose.
               | 
               | The risk vs. reward component also needs to be managed in
               | order to deter criminal behavior. This starts with
               | regulation.
               | 
               | For the record, I believe regulation of AI/ML is
               | ridiculous. This is nothing more than a power grab.
        
             | kypro wrote:
             | This isn't a trick question, genuinely curious - do you
             | agree that guns are not the problem and should not be
             | regulated - yes, while they can be used for harm, the right
             | approach to gun violence is to police the crime.
             | 
             | I think the objection to this would be that currently not
             | everyone in the world an expert in biochemistry or at
             | hacking into computer systems. Even if you're correct in
             | principal, perhaps the risks of the technology we're
             | developing here is too high? We typically regulate
             | technologies which can easily be used to cause harm.
        
               | tome wrote:
               | > do you agree that guns are not the problem and should
               | not be regulated
               | 
               | But AI is not like guns in this analogy. AI is closer to
               | machine tools.
        
               | supriyo-biswas wrote:
               | AI systems provide many benefits to society, such as
               | image recognition, anomaly detection, educational and
               | programming used of LLMs, to name a few.
               | 
               | Guns only have a primarily harmful use which is to kill
               | or injure someone. While that act of killing may be
               | justified when the person violates societal values in
               | some way, making regular citizens the decision makers in
               | whether a certain behavior is allowed or disallowed and
               | being able to immediately make a judgment and execute
               | upon it leads to a sort of low-trust, vigilante
               | environment; which is why the same argument I made above
               | doesn't apply for guns.
        
               | logicchains wrote:
               | >whether a certain behavior is allowed or disallowed and
               | being able to immediately make a judgment and execute
               | upon it leads to a sort of low-trust, vigilante
               | environment
               | 
               | Have you any empirical evidence at all on this? From what
               | I've seen the open carry states in the US are generally
               | higher trust environments (as was the US in past when
               | more people carried). People feel safer when they know
               | somebody can't just assault, rob or rape them without
               | them being able to do anything to defend themselves. Is
               | the Tenderloin a high trust environment?
        
               | mensetmanusman wrote:
               | I think game theory around mutually assured destruction
               | has convinced me that the world is a safer place when a
               | number of countries have nuclear weapons.
               | 
               | The same thing might also be true in relation to guns and
               | the government's monopoly on violence.
               | 
               | Extending that to AI, the world will probably be a safer
               | place if there are far more AI systems competing with
               | each other and in the hands of citizens.
        
           | throwaway5959 wrote:
           | To be fair to the AI, overpopulation or rather
           | overconsumption is a problem for humanity. If people think we
           | can consume at current rates and have the resources to
           | maintain our current standard of living (at least in a
           | western sense) for even a hundred years, they're delusional.
        
           | wkat4242 wrote:
           | > This causes the lab to unknowingly produce a highly
           | contagious bioweapon which causes infertility.
           | 
           | I don't think this would be a bad thing :) Some people will
           | always be immune, humanity wouldn't die out. And it would be
           | a humane way for gradual population reduction. It would
           | create some temporary problems with elderly care (like what
           | China is facing now) but will make long term human prosperity
           | much more likely. We just can't keep growing against limited
           | resources.
           | 
           | The Dan Brown book Inferno had a similar premise and I was
           | disappointed they changed the ending in the movie so that it
           | didn't happen.
        
           | RandomLensman wrote:
           | You have a very strong hypothesis about the AI system just
           | being able to "think up" such a bioweapon (and also the
           | researchers being clueless in implementation). I see doomsday
           | scenarios often assuming strong advances in sciences in the
           | AI etc. - there is little evidence for that kind of
           | "thinkism".
        
             | somethingreen wrote:
             | The whole "LLMs are not just a fancy auto-complete"
             | argument is based on the fact that they seem to be doing
             | stuff beyond what they are explicitly programmed to do or
             | were expected to do. Even at the current infant scale there
             | doesn't seem to be an efficient way of detecting these
             | emergent properties. Moreover, the fact that you don't need
             | to understand what LLM does is kind of the selling point.
             | The scale and capabilities of AI will grow. It isn't
             | obvious how any incentive to limit or understand those
             | capabilities would appear from their business use.
             | 
             | If it is possible for AI to ever acquire ability to develop
             | and unleash a bioweapon is irrelevant. What is relevant is
             | that as we are now, we have no control or way of knowing
             | that it has happened, and no apparent interest in gaining
             | that control before advancing the scale.
        
               | revelio wrote:
               | "Are Emergent Abilities of Large Language Models a
               | Mirage?"
               | 
               | https://arxiv.org/pdf/2304.15004.pdf
               | 
               |  _our alternative suggests that existing claims of
               | emergent abilities are creations of the researcher's
               | analyses, not fundamental changes in model behavior on
               | specific tasks with scale._
        
             | HDThoreaun wrote:
             | Humanity has already created bioweapons. The AI just needs
             | to find the paper that describes them.
        
           | revelio wrote:
           | _> so instead it uses its training data to seek other ways to
           | reduce human populations without extermination._
           | 
           | This is a real problem, but it's already problem with our
           | society, not AI. Misaligned public intellectuals routinely
           | try to reduce the human population and we don't lift a
           | finger. Focus where the danger actually is - us!
           | 
           | From Scott Alexander's latest post:
           | 
           |  _Paul Ehrlich is an environmentalist leader best known for
           | his 1968 book The Population Bomb. He helped develop ideas
           | like sustainability, biodiversity, and ecological footprints.
           | But he's best known for prophecies of doom which have not
           | come true - for example, that collapsing ecosystems would
           | cause hundreds of millions of deaths in the 1970s, or make
           | England "cease to exist" by the year 2000.
           | 
           | Population Bomb calls for a multi-pronged solution to a
           | coming overpopulation crisis. One prong was coercive mass
           | sterilization. Ehrlich particularly recommended this for
           | India, a country at the forefront of rising populations.
           | 
           | In 1975, India had a worse-than-usual economic crisis and
           | declared martial law. They asked the World Bank for help. The
           | World Bank, led by Robert McNamara, made support conditional
           | on an increase in sterilizations. India complied [...] In the
           | end about eight million people were sterilized over the
           | course of two years.
           | 
           | Luckily for Ehrlich, no one cares. He remains a professor
           | emeritus at Stanford, and president of Stanford's Center for
           | Conservation Biology. He has won practically every
           | environmental award imaginable, including from the Sierra
           | Club, the World Wildlife Fund, and the United Nations (all >
           | 10 years after the Indian sterilization campaign he
           | endorsed). He won the MacArthur "Genius" Prize ($800,000) in
           | 1990, the Crafoord Prize ($700,000, presented by the King of
           | Sweden) that same year, and was made a Fellow of the Royal
           | Society in 2012. He was recently interviewed on 60 Minutes
           | about the importance of sustainability; the mass
           | sterilization campaign never came up. He is about as honored
           | and beloved as it's possible for a public intellectual to
           | get._
        
             | johntiger1 wrote:
             | Wow, what a turd. Reminds me of James Watson
        
           | davidguetta wrote:
           | Sci-fi is a hell of a drug
        
             | orbitalmechanic wrote:
             | Shout out to his family.
        
           | touristtam wrote:
           | You seems to imply sentience from this "ai".
        
         | EnragedParrot wrote:
         | I'm not sure that the regulation being proposed by Altman is
         | good, but you're vastly misstating the actual purported threat
         | posed by AI. Altman and the senators quoted in the article
         | aren't expressing fear that AI is becoming sentient, they are
         | expressing the completely valid concern that AI sounds an awful
         | lot like not-AI nowadays and will absolutely be used for
         | nefarious purposes like spreading misinformation and committing
         | identity crimes. The pace of development is happening way too
         | rapidly for any meaningful conversations around these dangers
         | to be had. Within a few years we'll have AI-generated videos
         | that are indistinguishable from real ones, for instance, and it
         | will be impossible for the average person to discern if they're
         | watching something real or not.
        
         | adamsmith143 wrote:
         | "It's just a stochastic parrot" is one of the dumbest takes on
         | LLM's of all time.
        
           | micromacrofoot wrote:
           | What I don't understand about the dismissals is that a
           | "stochastic parrot" is a big deal in its own right -- it's
           | not like we've been living in a world with abundant and
           | competent stochastic parrots, this is very obviously a new
           | and different thing. We have entire industries and
           | professions that are essentially stochastic parrotry.
        
         | logicallee wrote:
         | >Imagine thinking that regression based function approximators
         | are capable of anything other than fitting the data you give
         | it.
         | 
         | Are you aware that you are an 80 billion neuron biological
         | neural network?
        
           | lm28469 wrote:
           | And this is why I always hate how computer parts are named
           | with biological terms.... a neural network's neuron doesn't
           | share much with a human brain's neuron
           | 
           | Just like a CPU isn't "like your brain" and HDD "like your
           | memories"
           | 
           | Absolutely nothing says our current approach is the right one
           | to mimic a human brain
        
             | logicallee wrote:
             | >a neural network's neuron doesn't share much with a human
             | brain's neuron
             | 
             | What are the key differences?
        
               | wetpaws wrote:
               | Nobody knows tbh.
        
             | wetpaws wrote:
             | Internal differences do not necessary translate to
             | conceptual differences. Combustion engine and electric
             | engine do the same job despite operating on completely
             | different internal principles. (Yes, it might not be a
             | perfect an analogy, but it illustrates the point.)
        
             | alpaca128 wrote:
             | > a neural network's neuron doesn't share much with a human
             | brain's neuron
             | 
             | True, it's just binary logic gates, but it's a _lot_ of
             | them and if they can simulate pretty much anything why
             | should intelligence be magically exempt?
             | 
             | > Absolutely nothing says our current approach is the right
             | one to mimic a human brain
             | 
             | Just like nothing says it's the wrong one. I don't think
             | those regulation suggestions are a good idea at all (and
             | say a lot about a company called _Open_ AI), but that
             | doesn't mean we should treat it like the NFT hype.
        
             | iliane5 wrote:
             | The human brain works around a lot of limiting biological
             | functions. The necessary architecture to fully mimic a
             | human brain on a computer might not look anything like the
             | actual human brain.
             | 
             | That said, there are 8B+ of us and counting so unless there
             | is magic involved, I don't see why we couldn't do a "1:1"
             | replica of it (maybe far) in the future.
        
         | ramraj07 wrote:
         | Imagine being supposedly at the forefront of AI or engineering
         | and being the last people (if ever) to concede simple concepts
         | could materialize complex intelligence. Even the publicly
         | released version of this thing is doing insane tasks, passes
         | any meaningful version of a Turing test, reasons it's way into
         | nearly every professional certification exam out there, and
         | you're still insisting its not smart or worrying because what
         | again? Your math ability or disdain for an individual?
        
           | jazzyjackson wrote:
           | your comment reads to me as totally disconnected to the OP,
           | whose concern relates to using the appearance of intelligence
           | as a scare tactic to build a regulatory moat.
        
             | adamsmith143 wrote:
             | Actually OP is clearly, ironically, parroting the
             | stochastic parrot idea that LLMs are incapable of anything
             | other than basic token prediction and dismissing any of
             | their other emergent abilities.
        
               | jazzyjackson wrote:
               | yea but that's a boring critique and not the point they
               | were making - whether or not LLMs reason or parrot has no
               | relevance to whether Mr Altman should be the one building
               | the moat.
        
               | woeirua wrote:
               | Spoiler alert: they're actually both LLMs arguing with
               | one another.
        
         | anonymouse008 wrote:
         | This also explains the 'recent advancements' best use cases -
         | parsers. "Translate this from python to js or this struct to
         | that json."
        
         | ilrwbwrkhv wrote:
         | Sam Altman is a great case of failing upwards. And this is the
         | problem. You don't get to build a moral backbone if you fake
         | your brilliance.
        
           | gumballindie wrote:
           | Gives me the impression of someone who knows they are a fraud
           | but they still do what they do hoping no one will catch on or
           | that if the lie is big enough people will believe it. Taking
           | such an incredible piece of tech and turning it into a fear
           | mongering sci fi tool for milking money off of gullible
           | people is creepy to say the least.
        
             | ilrwbwrkhv wrote:
             | His mentor Peter Thiel also has this same quality. Talks
             | about flying cars, but builds chartjs for the government
             | and has his whole career thanks to one lucky investment in
             | Facebook.
        
           | cguess wrote:
           | His last thing is "WorldCoin" which, before pretty much
           | completely failing did manage to scan the irises of 20% of
           | the world's low income people which they definitely were all
           | properly informed about.
           | 
           | He's a charlatan, which makes sense he gets most of his money
           | from Thiel and Musk. Why do so many supposedly smart people
           | worship psychotic idiots?
        
             | ilrwbwrkhv wrote:
             | I think it is the same instinct in humans which made Sir
             | Arthur Conan Doyle fall for seances and mediums and all
             | those hoaxes. The need to believe something is there which
             | is hidden and unknown. It is the drive to curiosity.
             | 
             | The way Peter, Musk, Sam and these guys talk, it has this
             | aura of "hidden secrets". Things hidden since the
             | foundation of the world.
             | 
             | Of course the reality is they make their money the old
             | fashioned way: connections. The same way your local builder
             | makes their money.
             | 
             | But smart people want to believe there is something more.
             | Surely AI and your local condo development cannot have the
             | same underlying thread.
             | 
             | It is sad and unfortunately the internet has made things
             | easier than ever.
        
           | [deleted]
        
         | sharemywin wrote:
         | While I think it needs goals to be some kind of AGI, it
         | certainly can plan and convince people of things. Also, seems
         | like the goal already exists maximize shareholder value. In
         | fact if AI can beat someone at chess and figure out protein
         | folding and figure out fusion plasma design, why is it a
         | stretch to think it can't be good at project management. To me
         | a scenario where it leads to an immediate reduction in the
         | human population of some moderately large % would still be a
         | bad outcome. So, even it you just think of it as an index of
         | most human knowledge it does need some kind of mechanism to
         | manage who has access to what. I don't want every to know how
         | to make a bomb.
         | 
         | Is a license the best way forward I don't know but I do feel
         | like this is more than a math formula.
        
           | iliane5 wrote:
           | > I don't want every to know how to make a bomb.
           | 
           | This information is not created inside the LLMs, it's part of
           | their training data. If someone is motivated enough, I'm sure
           | they'd need no more than a few minutes of googling.
           | 
           | > I do feel like this is more than a math formula
           | 
           | The sum is greater than the parts! It can just be a math
           | formula and still produce amazing results. After all, our
           | brains are just a neat arrangement of atoms :)
        
         | cookieperson wrote:
         | The real problem here is that the number of crimes you can
         | commit with LLMs is much higher then the number of good things
         | you can do with it. It's pretty debatable that if society were
         | fair or reasonable with decent laws in place that LLMs training
         | corpus shouldn't even be legal. But here we are, waiting for
         | more billionaires to cash in.
        
           | ur-whale wrote:
           | > The real problem here is that the number of crimes you can
           | commit with LLMs is much higher then the number of good
           | things you can do with it
           | 
           | Yeah? Did you get a crystal ball for Christmas to be able to
           | predict what can and can't be done with a new technology?
        
           | estebarb wrote:
           | It is literally a language calculator. It is useful for a lot
           | more things than crimes.
        
       | andrewstuart wrote:
       | Beautiful power play.
       | 
       | Lock out competition.
       | 
       | Pull up the drawbridge.
       | 
       | Silicon Valley always a leader in dirty tactics.
        
       | jonathankoren wrote:
       | When you can't out innovate you our competitors (eg the open
       | source alternatives), go for regulatory capture.
        
       | leesec wrote:
       | OpenAI builds popular product -> people complain and call for
       | caution on Hackernews OpenAI recommends regulation -> people
       | complain and call for freedom on Hackernews
        
       | carrja99 wrote:
       | Trying to put that not up eh?
        
       | beambot wrote:
       | Feels like a "Just Be Evil" corporate motto to me, but that's
       | counter to my first-hand experiences with Sam & others at OpenAI.
       | 
       | Can someone steelman Sam's stance?
       | 
       | A couple possibilities come to mind: (a) _avoiding_ regulatory
       | capture by genuinely bad actors; (b) prevent overzealous
       | premature regulation by getting in front of things; (c)
       | countering fear-mongering for the AGI apocalypse; or (d) genuine
       | concern. Others?
        
       | hazmazlaz wrote:
       | Of course one of the first companies to create a commercial "AI"
       | would lobby the government to create regulatory barriers to
       | competition in order to provide a moat for their business. While
       | their product is undeniably good, I am disappointed in OpenAI's
       | business practices in this instance.
        
       | EVa5I7bHFq9mnYK wrote:
       | Good luck getting Putin or Kim Jong Un to obtain that license.
        
       | fnordpiglet wrote:
       | I don't understand the need to control AI tech, no matter how
       | advanced, in any way what-so-ever.
       | 
       | It is a tool. If I use a tool for illegal purposes I have broken
       | the law. I can be held accountable for having broken the law. If
       | the laws are deficient, make the laws stronger and punish people
       | for wrong deed, regardless of the tool at hand.
       | 
       | This is a naked attempt to build a regulatory moat while
       | capitalizing on fear of the unknown and ignorance. It's
       | attempting to regulate research into something that has no
       | external ability to cause harm without the use of a principal
       | directing it.
       | 
       | I can see a day (perhaps) when AIs have some form of independent
       | autonomy, or even display agency and sentience, when we can
       | revisit. Other issues come into play as well, such as the
       | morality of owning a sentience and what that entails. But that is
       | way down the road. And even further if Microsoft's proxy closes
       | the doors on anyone but Microsoft, Google, Amazon, and Facebook.
        
         | unethical_ban wrote:
         | The below is not an endorsement of any particular regulation.
         | 
         | It is a tool which allows any individual to have nearly instant
         | access not only to all the world's public data, but the ability
         | to correlate and research that data to synthesize new
         | information quickly.
         | 
         | Without guardrails, someone can have a completely amoral LLM
         | that has the ability to write persuasive manifestos on any kind
         | of extremist movement that prior would have taken someone with
         | intelligence.
         | 
         | A person will be able to ask the model how best to commit
         | various crimes with the lowest chances of being caught.
         | 
         | It will enable a level of pattern matching and surveillance yet
         | unseen.
         | 
         | I know the genie is out of the bottle, but there are absolutely
         | monumental shifts in technology happening that can and will be
         | used for evil and mere dishonesty.
         | 
         | And those are just the ways LLM and "AI" will fuck with us
         | without guardrails. Even in a walled garden, we honestly won't
         | be able to trust any online interaction with people in the near
         | future. Your comment and mine could both be LLM generated in
         | the near future. Webs of trust will be more necessary.
         | 
         | Anyone who _can 't_ think of about five ways AI is going to
         | radically shake society isn't thinking hard enough.
        
           | cal5k wrote:
           | If LLMs/AI are the problem, they're also the solution. Access
           | restrictions will do nothing but centralize control over one
           | of the most important developments of the century.
           | 
           | What if we required licenses to create a website? After all,
           | some unscrupulous individuals create websites that sell drugs
           | and other illicit things!
        
           | tomrod wrote:
           | > Without guardrails, someone can have a completely amoral
           | LLM that has the ability to write persuasive manifestos on
           | any kind of extremist movement that prior would have taken
           | someone with intelligence.
           | 
           | In an earlier time, we called these "books" and there was
           | some similar backlash. But I digress.
        
             | kredd wrote:
             | Not that I support AI regulations, but reading a book is a
             | higher barrier to entry than asking a chat assistant to do
             | immoral things.
        
               | fnordpiglet wrote:
               | (Acknowledging you didn't support regulation in your
               | statement, just riffing)
               | 
               | Then write laws and regulations about the actions of
               | humans using the tools. The tools have no agency. The
               | human using them towards bad ends do.
               | 
               | By the way, writing things the state considers immoral is
               | an enshrined right.
               | 
               | How do you draw the line between AI writing assistance
               | and predictive text auto completion and spell check in
               | popular document editors today? I would note that
               | predictive text is completely amoral and will do all
               | sorts of stuff the state considers immoral.
               | 
               | Who decides what's immoral? The licensing folks in the
               | government? What right do they have to tell me my
               | morality is immoral? I can hold and espouse any morality
               | I desire so long as I break no law.
               | 
               | I'd note that as a nation we have a really loose phrasing
               | in the bill of rights for gun rights, but a very clear
               | phrasing about freedom of speech. We generally say today
               | that guns are free game unless used for illegal actions.
               | These proposals say tools that take our thoughts and
               | opinions and creation of language to another level are
               | more dangerous than devices designed for no other purpose
               | than killing things.
               | 
               | Ben Franklin must be spinning so fast in his grave he's
               | formed an accretion disc.
        
             | unethical_ban wrote:
             | If you can scan city schematics, maps, learn about civil
             | and structural engineering through various textbooks and
             | plot a subway bombing in an afternoon, you're a faster
             | learner than I am.
             | 
             | Let me be clear: everyone in the world is about to have a
             | Jarvis/Enterprise ship's computer/Data/name-your-assistant
             | available to them, but ready and willing to use their power
             | for nefarious purposes. It is not just a matter of reading
             | books. It lowers the barrier on a lot of things, good and
             | bad, significantly.
        
               | tomrod wrote:
               | > It lowers the barrier on a lot of things, good and bad,
               | significantly.
               | 
               | Like books!
        
               | unethical_ban wrote:
               | Yes, I understand your analogy.
               | 
               | I am not endorsing restrictions. I was merely stating the
               | fact that this shit is coming down the pipe, and it
               | /will/ be destabilizing, and just because society
               | survived the printing press doesn't mean the age of AI
               | will be safe or easy.
        
               | fnordpiglet wrote:
               | But at least Alexa will be able to order ten rolls of
               | toilet paper instead of ten million reams of printer
               | paper
        
               | fnordpiglet wrote:
               | Crimes are crimes the person commits. Planning an attack
               | is a crime. Building a model to commit crimes is probably
               | akin to planning an attack, and might itself be a crime.
               | But the thought that researchers and the every man have
               | to be kept away from AI so globo mega corps can protect
               | us from the AI enabled Lex Luthor is absurd. The
               | protections against criminal activity is already codified
               | in law.
        
       | jamesfmilne wrote:
       | This is foxes going before Congress asking for regulation and
       | licensing for purposes of raiding henhouses.
        
       | neel8986 wrote:
       | PG predicted that
       | https://twitter.com/paulg/status/1624569079439974400?lang=en Only
       | it is not the incumbents but his own prodigy Sam asking for
       | regulation where big companies like Meta and Amazon giving LLMs
       | for free.
        
       | smsm42 wrote:
       | We have some technology that others don't yet, please government
       | make it so that this would be the case as long as possible, for
       | reasons totally having nothing to do with us having the
       | technology, we swear.
        
       | NaN1352 wrote:
       | Please limit our competitors, we want all the money$$$
        
       | thrill wrote:
       | The more _independent_ quality AIs there are then the less likely
       | that any one of them can talk the others into doing harm.
        
       | amelius wrote:
       | What if China doesn't require licensing?
        
         | anticensor wrote:
         | Then an internet censorship operation would prevent accessing
         | the Chinese model from outside the China ( _The_ is necessary,
         | given there are four Chinas).
        
       | Giorgi wrote:
       | What an Ahole. Built it himself and now is trying to monopolize
       | it.
        
         | sadhd wrote:
         | He's pulling up that ladder as fast as he can....probably
         | sawing it in half to knock the few people clinging to it back
         | to 'go be poor somewhere elseland'
        
         | [deleted]
        
       | m3kw9 wrote:
       | Less competition is the draw back of requiring all the red tape
        
       | BirAdam wrote:
       | "Oh dear Congress, my company can't handle open competition!
       | Please pass this regulation allowing us to pull the ladder up
       | behind us!" -- Sam Altman
       | 
       | (Not a real quote)
        
       | catchnear4321 wrote:
       | > In his first appearance before a congressional panel, CEO Sam
       | Altman is set to advocate licensing or registration requirements
       | for AI with certain capabilities, his written testimony shows.
       | 
       | papers for thee but not for me
        
         | qludes wrote:
         | Is this similar to how it is handled in the life sciences?
        
           | boeingUH60 wrote:
           | Of course, because they're giving people drugs capable of
           | incapacitating them if handled wrongly....AI, on the other
           | hand, why should someone need a license to train their large
           | language model?
        
           | catchnear4321 wrote:
           | is that a fair comparison?
        
       | agnosticmantis wrote:
       | Let's boycott all these AGI doom clowns by not buying/supporting
       | their products and services.
       | 
       | AGI grifters are not just dishonest snake oil salespeople, but
       | their lies also has a chilling effect on genuine innovation by
       | deceiving the non-technical public into believing an apocalypse
       | will happen unless they set obstacles on people's path to
       | innovation.
       | 
       | Yann LeCun and Andrew Ng are two prominent old timers who are
       | debunking the existential nonsense that the AI PR industrial
       | machine is peddling to hinder innovation, after they benefited
       | from the open research environment.
       | 
       | OpenAI's scummy behavior has already led the industry to be less
       | open to sharing advances, and now they're using lobbying to kill
       | new competition in the bud.
       | 
       | Beyond all else the hypocrisy is just infuriating and
       | demoralizing.
        
       | agnosticmantis wrote:
       | I used to be very enthusiastic about the tech industry and the
       | Silicon Valley culture before getting into it, but having worked
       | in tech for a while I feel very demoralized and disillusioned
       | with all the blatant lies and hypocrisy that seems central to
       | business.
       | 
       | I wouldn't mind ruthless anti-competitive approaches to business
       | as much, but the hypocrisy is really demoralizing.
        
         | colpabar wrote:
         | For me it was when I figured out what the "gig economy" was - a
         | way to make money off peoples labor without all the annoyances
         | that come with having employees.
        
           | eastbound wrote:
           | At 5 you stop believing in Santa Claus,
           | 
           | At 25 you stop believing in love,
           | 
           | At 40 you stop believing in corporations' sincerity?
        
             | polishdude20 wrote:
             | Try 29
        
               | noir_lord wrote:
               | Was about 15 for me but I read cyberpunk in the 90's
               | which shaped my view of powerful private entities.
        
       | oldstrangers wrote:
       | Any firm large enough to build AI projects on the scale of
       | ChatGPT will be large enough to bid on Government AI contracts.
       | In which case, there will be zero regulations on what you can and
       | cannot do in terms of "national security" in relation to AI.
       | Which is fair, considering our adversaries won't be limiting
       | themselves either.
       | 
       | The only regulations that matter will be applied to the end user
       | and the hobbyists. You won't be able to just spin up an AI
       | startup in your garage. So in that sense, the regulations are
       | pretty transparently an attempt to stifle competition and funnel
       | the real progress through the existing players.
       | 
       | It also forces the end users down the path of using only a few
       | select AI service providers as opposed to the technology just
       | being readily available.
        
       | epicureanideal wrote:
       | This is just regulatory capture. They're trying to build a moat
       | around their product by preventing any scrappy startups from
       | being able to develop new products.
        
       | web3-is-a-scam wrote:
       | Ah yes, classic regulatory capture.
        
       | candiddevmike wrote:
       | I'm sad that we've lost the battle with calling these things AI.
       | LLMs aren't AI, and I don't think they're even a path towards AI.
        
         | a13o wrote:
         | I started at this perspective, but nobody could agree on the
         | definition of the A, or the I; and also the G. So it wasn't a
         | really rigorous technical term to begin with.
         | 
         | Now that it's been corraled by sci-fi and marketers, we are
         | free to come up with new metaphors for algorithms that reliably
         | replace human effort. Metaphors which don't smuggle in all our
         | ignorance about intelligence and personhood. I ended up feeling
         | pretty happy about that.
        
           | causi wrote:
           | Whether LLMs will be a base technology to AI, we should
           | remember one thing: logically it's easier to convince a human
           | that a program is sapient than to actually make a program
           | sapient, and further, it's easier still to make a program do
           | spookily-smart things than it is to make a program that can
           | convince a human it is sapient. We're just getting to the
           | slightly-spooky level.
        
           | kelseyfrog wrote:
           | I've come to the same conclusion. AGI(and each separately) is
           | better understood as a epistemological problem in the domain
           | of social ontology rather than a category bestowable by AI/ML
           | practitioners.
           | 
           | The reality is that our labeling of something as artificial,
           | general, or intelligent is better understood as a social fact
           | than a scientific fact - even if purely the role of
           | operationalization of each of these is a free parameter in
           | their respective groundings which makes it near useless when
           | taking them as "scientifically" measurably qualities. Any
           | scientist who assumes an operationalization without admitting
           | such isn't doing science - they may as well be astrology at
           | that point.
        
         | Vox_Leone wrote:
         | >>I'm sad that we've lost the battle with calling these things
         | AI. LLMs aren't AI, and I don't think they're even a path
         | towards AI.
         | 
         | Ditto the sentiments. What about other machine learning
         | modalities, like image detection? Will I need a license for my
         | mask rcnn models?. Maybe it is just me, but the whole thing
         | reeks of _control_
        
         | vi2837 wrote:
         | Yeah, what is named AI for now, is not AI at all.
        
           | Robotbeat wrote:
           | AI doesn't imply it's general intelligence.
        
           | mindcrime wrote:
           | Something doesn't need to be full human-level general
           | intelligence to be considered as falling under the "AI"
           | rubric. In the past people spoke of "weak AI" versus "strong
           | AI" and/or "narrow AI" vs "wide AI" to reflect the different
           | "levels" of AI. These days the distinction that most people
           | use is "AI" vs "AGI" which you could loosely ( _very_
           | loosely) speaking think of as somewhat analogous to  "weak
           | and/or narrow AI" vs "strong, wide AI".
        
         | shawabawa3 wrote:
         | If LLMs aren't AI nothing else is AI so far either
         | 
         | What exactly does AI mean to you?
        
           | brkebdocbdl wrote:
           | thanks for exemplifying the problem.
           | 
           | intelligence is what allows one to understand phrases and
           | then construct meaning from it. e.g. the paper is yellow. AI
           | will need to have a concept of paper and yellow. and the to
           | be verb. LLMs just mash samples and form a basic map of what
           | can be throw in one bucket or another with no concept of
           | anything or understanding.
           | 
           | basically, AI is someone capable of minimal criticism. LLMs
           | are someone who just sit in front of the tv and have knee
           | jerk reactions without an ounce of analytical though. qed.
        
             | shawabawa3 wrote:
             | > basically, AI is someone capable of minimal criticism
             | 
             | That's not the definition of AI or intelligence
             | 
             | You're letting your understanding of how LLMs work bias
             | you. They may be at their core a token autocompleter but
             | they have emergent intelligence
             | 
             | https://en.m.wikipedia.org/wiki/Emergence
        
             | jameshart wrote:
             | LLMs absolutely have a concept of 'yellow' and 'paper' and
             | the verb 'to be'. They are nothing BUT a collection of
             | mappings around language concepts. And their connotative
             | and denotative meanings, their cultural associations, the
             | contexts in which they arise and the things they can and
             | cannot do. It knows that paper's normally white and that
             | post-it notes are often yellow; it knows that paper can be
             | destroyed by burning or shredding or dissolving in water;
             | it knows paper can be marked and drawn and written on and
             | torn and used to write letters or folded to make origami
             | cranes.
             | 
             | What kind of 'understanding' are you looking for?
        
             | logdap wrote:
             | > _intelligences is what allows one to understand phrases
             | and then construct meaning from it. e.g. the paper is
             | yellow_
             | 
             | That doesn't clarify anything, you've ever only shuffled
             | the confusion around, moved it to 'understand' and
             | 'meaning'. What does it mean to _understand_ yellow? An LLM
             | or another person could tell you things like _" Yellow?
             | Why, that's the color of lemons"_ or give you a dictionary
             | definition, but does that demonstrate 'understanding',
             | whatever that is?
             | 
             | It's all a philosophical quagmire, made all the worse
             | because for some people its a matter of faith that human
             | minds are fundamentally different from anything soulless
             | machines can possibly do. But these aren't important
             | questions anyway for the same reason. Whether or not the
             | machine 'understands' what it means for paper to be yellow,
             | it can still perform tasks that relate to the yellowness of
             | paper. You could ask an LLM to write a coherent poem about
             | yellow paper and it easily can. Whether or not it
             | 'understands' has no real relevance to practical
             | engineering matters.
        
             | mindcrime wrote:
             | _intelligence is what allows one to understand phrases and
             | then construct meaning from it. e.g. the paper is yellow._
             | 
             | That's one, out of many, definitions of "intelligence". But
             | there's no _particular_ reason to insist that that is _the_
             | definition of intelligence in any universal, objective
             | sense. Especially in terms of talking about  "artificial
             | intelligence" where plenty of people involved in the field
             | will allow that the goal is not necessarily to exactly
             | replicate human intelligence, but rather simply to achieve
             | behavior that matches "intelligent behavior" regardless of
             | the mechanism behind it.
        
             | hammyhavoc wrote:
             | Is what you're describing simply not what people are using
             | the term AGI to loosely describe? An LLM is an AI model is
             | it not? No, it isn't an AGI, no, I don't think LLMs are a
             | path to an AGI, but it's certainly ML, which is objectively
             | a sub-field of AI.
        
       | cryptonector wrote:
       | Licenses? They'd better be shall-issue, or this is just asking
       | the government to give early movers protection from disruptors --
       | a very bad look that.
        
       | fritzo wrote:
       | Full video of testimony on CSPAN
       | https://www.c-span.org/video/?528117-1/openai-ceo-testifies-...
        
       | woah wrote:
       | I had ChatGPT write a letter to your senator:
       | 
       | Subject: Urgent: Concerns Regarding Sam Altman's Proposed AI
       | Regulation
       | 
       | Dear Senator [Senator's Last Name],
       | 
       | I hope this letter finds you in good health and high spirits. My
       | name is [Your Name] and I am a resident of [Your City, Your
       | State]. I am writing to express my deep concerns regarding the
       | Artificial Intelligence (AI) regulation proposal put forth by Sam
       | Altman. While I appreciate the necessity for regulations to
       | ensure ethical and safe use of AI, I believe the current proposal
       | has significant shortcomings that could hamper innovation and
       | growth in our state and the country at large.
       | 
       | Firstly, the proposal appears to be overly restrictive,
       | potentially stifering innovation and the development of new
       | technology. AI, as you are aware, holds immense potential to
       | drive economic growth, increase productivity, and address complex
       | societal challenges. However, an excessively stringent regulatory
       | framework could discourage small businesses and startups, the
       | lifeblood of our economy, from innovating in this promising
       | field.
       | 
       | Secondly, the proposal does not seem to take into account the
       | rapid evolution of AI technologies. The field of AI is highly
       | dynamic, with new advancements and capabilities emerging at a
       | breathtaking pace. Therefore, a one-size-fits-all approach to AI
       | regulation may quickly become outdated and counterproductive,
       | inhibiting the adoption of beneficial AI applications.
       | 
       | Lastly, the proposed legislation seems to focus excessively on
       | potential risks without adequately considering the immense
       | benefits that AI can bring to society. While it is prudent to
       | anticipate and mitigate potential risks, it is also important to
       | strike a balanced view that appreciates the transformative
       | potential of AI in areas such as healthcare, education, and
       | climate change, among others.
       | 
       | I strongly urge you to consider these concerns and advocate for a
       | balanced, flexible, and innovation-friendly approach to AI
       | regulation. We need policies that not only mitigate the risks
       | associated with AI but also foster an environment conducive to
       | AI-driven innovation and growth.
       | 
       | I have faith in your leadership and your understanding of the
       | pivotal role that technology, and specifically AI, plays in our
       | society. I am confident that you will champion the right course
       | of action to ensure a prosperous and technologically advanced
       | future for our state and our country.
       | 
       | Thank you for your time and consideration. I look forward to your
       | advocacy in this matter and will follow future developments
       | closely.
       | 
       | Yours sincerely,
       | 
       | [Your Name] [Your Contact Information]
        
       | brap wrote:
       | As always, the people calling for regulations are the big guys
       | trying to stop the little guys by creating a legal moat. Always
       | the same old story.
        
       | neonate wrote:
       | http://web.archive.org/web/20230516122128/https://www.reuter...
        
       | flangola7 wrote:
       | Sam Altman's hubris will get us all killed. It shouldn't be
       | "licensed" it should be destroyed with the same furor as
       | dangerous pathogens.
       | 
       | This small step of good today does not undo the fact that he is
       | still plowing ahead in capability research.
        
       | waffletower wrote:
       | Sam: "Dear committee: I'd like to propose a new regulation for AI
       | which will bring comfort to Americans, while ensuring that OpenAI
       | and Microsoft develop and maintain a monopoly with our products."
        
       | dzonga wrote:
       | ah, the good ol' regulatory capture.
       | 
       | sam must been hanging out with Peter thiel big time.
       | 
       | laws and big government for you, not for me type of thing.
        
       | RandomLensman wrote:
       | Finally we'll regulate linear algebra. Joking aside, AIs that on
       | the one hand can cure cancer but can do nothing against
       | misinformation, let alone genocidal AIs, are perhaps mythical
       | creatures, not real ones.
        
         | varelse wrote:
         | [dead]
        
         | lurker919 wrote:
         | Don't you dare differentiate those weights! Hands in the air!
        
       | teekert wrote:
       | Please congress, stop all those open source innovators that use
       | things like LoRA to cheaply create LLMs that match AIs in our
       | multi billion $ business model!
        
       | mdp2021 wrote:
       | Other sources mention more clearly that a proposal is made for an
       | entity that would "provide (and revoke) licences to create AI".
       | 
       | Can this be seen as curbing Open Source AI as a consequence?
        
         | vippy wrote:
         | I think that's the point.
        
           | happytiger wrote:
           | Is this why VCs aren't investing in the area? Investment has
           | been historically quite low for a new technology area, and
           | it's so obviously the next big wave of technology. I've been
           | looking for some explanation or series of explanation to
           | adequately explain it.
        
       | vinay_ys wrote:
       | There's no need to go the license way, yet. They can do some
       | simple safety regulations - put a restriction on using AI with
       | kinetic devices, life-critical situations, critical financial
       | situations, and any situation where human is completely out of
       | the loop. Also, put clear liability for harm caused in any
       | situations where AI was involved on the AI supplier. Also, they
       | can put disclosure rules on any company that is spending more
       | than $10M on AI.
        
       | ChrisMarshallNY wrote:
       | Although I think that AI could be quite dangerous, I'm skeptical
       | that "licensing" will do anything more than guarantee the
       | existing big players _( <cough>OpenAI</cough>)_ an entrenchment.
       | 
       | The baddies have never let licenses _( "Badges? We doan' need no
       | steenkin' badges!")_ stop them.
        
       | dahwolf wrote:
       | It's easy to tell if an AI head genuinely cares about the impact
       | of AI on society: they only talk about AI's output, never its
       | input.
       | 
       | They train their models on the sum of humanity's digital labor
       | and creativity and do so without permission, attribution or
       | compensation. You'll never hear a word about this from them,
       | which means ethics isn't a priority. It's all optics.
        
         | precompute wrote:
         | Yep. No page on OpenAI's website about the thousands of
         | underpaid third-world workers that sit and label the data. They
         | will try and build momentum and avoid the "uncomfortable"
         | questions at all costs.
        
           | dahwolf wrote:
           | I empathize with that issue, especially the underpaid part,
           | but superficially that work is still a type of value exchange
           | based on consent: you do the labeling, you get paid (poorly).
           | 
           | Yet for the issue I discussed, there's no value exchange at
           | all. There's no permission or compensation for the people
           | that have done the actual work of producing the training
           | material.
        
             | precompute wrote:
             | Oh yeah. And labeling it as an "AI" further obfuscates it.
             | But apart from small gestures catered to people whose work
             | is very "unique" / identifiable, no one else will get a
             | kickback. They only need to kick the ball further for a
             | couple more years and then it'll become a non-issue as
             | linkrot takes over. Or maybe they use non-public domain
             | stuff, maybe they have secret deals with publishers.
             | 
             | Heck, sometimes even google doesn't pay people for
             | introducing new languages to their translation thingy.
             | 
             | https://restofworld.org/2023/google-translate-sorani-
             | kurdish...
        
       | nerdo wrote:
       | Oi, you got a loicense for that regression function?
        
       | xnx wrote:
       | Not the first time that OpenAI has claimed their technology is so
       | good it's dangerous. (From early 2019:
       | https://techcrunch.com/2019/02/17/openai-text-generator-dang...)
       | This is the equivalent of martial artists saying that their hands
       | have to be registered as deadly weapons.
        
         | HDThoreaun wrote:
         | 50% of AI researchers think there's a greater than 10% chance
         | that AI causes human extinction. It's not only openAI and sam
         | who think this is dangerous.
        
           | encryptluks2 wrote:
           | [dead]
        
       | ok_dad wrote:
       | I just want to also chime in here and say this is what I expected
       | from the folks who currently control this tech: to leverage
       | political connections to legally cement themselves in the market
       | as the leaders and disallow the common plebian from using the
       | world-changing tech. It enrages me SO MUCH that people act like
       | this. We could be colonizing planets, but instead a few people
       | want to keep all the wealth and power for themselves. I can't
       | wait to eat the rich; my fork will be ready.
        
         | alex_young wrote:
         | That got a little dark at the end. Surely some other remedies
         | short of cannibalism would suffice.
        
           | schwarzrules wrote:
           | Agreed. I used to have a diet eating the rich, but after I
           | found out about the greenhouse gas emissions from needed to
           | produce just one free-range rich person, I've switched to
           | ramen. /s
        
           | thehoff wrote:
           | It's a figure of speech.
           | 
           | https://en.m.wikipedia.org/wiki/Eat_the_Rich
        
             | optimalsolver wrote:
             | Maybe.
        
             | atlantic wrote:
             | Yes, but OP goes overboard by expanding the metaphor to
             | include forks, knives, napkins, and barbecue sauce.
        
               | dumpsterlid wrote:
               | [dead]
        
         | eastbound wrote:
         | I don't understand what their goal is; The law can only reach
         | within the USA (and EU). Are they not afraid of terrorist
         | competitors? It's like, it will be allowed to build LLMs
         | _everywhere_ except USA.
         | 
         | Sounds like USA would shoot itself in the foot.
        
         | dumpsterlid wrote:
         | [dead]
        
       | kypro wrote:
       | While I'd agree with sentiment in this threat that GPT-4 and
       | current AI models are not dangerous yet, I guess what I don't
       | understand is why so many people here believe we should allow
       | private companies to continue to develop the technology until
       | someone develops something dangerous?
       | 
       | Those here who don't believe AI should be regulated, do you not
       | believe AI can be dangerous? Is that you believe a dangerous AI
       | is so far away that we don't need to start regulating now?
       | 
       | Do you accept that if someone develops a dangerous AI tomorrow
       | there's no way to travel back in time and retroactively regulate
       | development?
       | 
       | It just seems so obvious to me that there should be oversight in
       | the development of a potentially dangerous technology that I
       | can't understand why people would be against it. Especially for
       | arguments as weak as "it's not dangerous yet".
        
         | sledgehammers wrote:
         | They are already dangerous in the way they cause global anxiety
         | and fear in people, and also because the effects of their usage
         | to the economy and real lives of the people are unpredictable.
         | 
         | AI needs to be regulated and controlled, the alternative is
         | chaos.
         | 
         | Unfortunately the current demented fossile and greedy
         | monopolist lead system is most likely incapable of creating a
         | sane & fair environment for the development of AI. I can only
         | hope I'm wrong.
        
           | wkat4242 wrote:
           | People always have been afraid of change. There's some
           | religious villages in the Netherlands where the train station
           | is way outside town because they didn't want this "devil's
           | invention" there :P and remember how much people bitched
           | about mobile phones. Or earlier, manual laborers were angry
           | about industrialisation. Now we're happy that we don't have
           | to do that kind of crap anymore.
           | 
           | Very soon they'll be addicted to AI like every other major
           | change.
        
       | RcouF1uZ4gsC wrote:
       | How about this instead:
       | 
       | How about a requirements that all weights and models for any AI
       | have to be publicly available.
       | 
       | Basically, these companies are trying to set themselves up to the
       | gatekeepers of knowledge. That is too powerful a capability to
       | leave in just the hands of a single company.
        
       | ToDougie wrote:
       | I hate I hate I HATE regulatory capture.
       | 
       | This is a transparent attempt at cornering the market and it
       | disgusts me. I am EXTREMELY disappointed in Sam Altman.
        
         | collaborative wrote:
         | I concluded he was scummy after his podcast with Lex Friedman.
         | Lex also sucked up hard and seems to be doing well riding the
         | AI hype wave
         | 
         | Pretty repulsive altogether
        
       | [deleted]
        
       | graycat wrote:
       | In simple terms:
       | 
       | Credibility and Checking. We have ways of checking suggestions.
       | Without passing such checks, for anything new, in simple terms.
       | there is no, none, zero credibility. Current AI does not
       | fundamentally change this situation: The AI output starts with
       | no, none, zero credibility and to be taken seriously needs to be
       | checked by traditional means.
       | 
       | AI is _smart_ or soon will be? Maybe so, but I don 't believe it.
       | Whatever, to be taken seriously, e.g., as more than just wild
       | suggestions to get credibility from elsewhere, AI results still
       | need to be checked by traditional means.
       | 
       | Our society has long checked nearly ALL claims from nearly ALL
       | sources before taking the claims seriously, and AI needs to pass
       | the same checks.
       | 
       | I checked the _credibility_ of ChatGPT for being _smart_ by
       | asking
       | 
       | (i) Given triangle ABC, construct D on AB and E on BC so that the
       | lengths AD = DE = EC.
       | 
       | Results: Grade of flat F. Didn't make any progress at all.
       | 
       | (ii) Solve the initial value problem of ordinary differential
       | equation
       | 
       | y'(t) = k y(t) ( b - y(t) )
       | 
       | Results: Grade of flat F. Didn't make any progress at all.
       | 
       | So, the AI didn't actually learn either high school plane
       | geometry or freshman college calculus.
       | 
       | For the hearings today, we have from Senator Blumenthal
       | 
       | (1) "... this apparent reasoning ..."
       | 
       | (2) "... the promise of curing cancer, of developing new
       | understandings of physics and biology ..."
       | 
       | Senator, you have misunderstood:
       | 
       | For (1), the AI is not "reasoning", e.g., can't _reason_ with
       | plane geometry or calculus. Instead, as in example you gave with
       | a clone of your voice and based on your Senate floor speeches,
       | the AI just rearranged some of your words.
       | 
       | For (2), the AI is not going to cure cancer or "develop new"
       | anything.
       | 
       | If some researcher does find a cure for a cancer and publishes
       | the results in a paper and AI reads the paper, there is still no
       | expectation that the AI will understand any of it -- recall, the
       | AI does NOT "understand" either high school plane geometry or
       | freshman college calculus. And without some input with a
       | recognized cure for the cancer, the AI won't know how to cure the
       | cancer. If the cure for cancer is already in the _training data_
       | , then the AI might be able to _regurgitate_ the cure.
       | 
       | Again, the AI does NOT "understand" either high school plane
       | geometry or freshman college calculus and, thus, there is no
       | reasonable hope that the AI will cure cancer or contribute
       | anything new and correct about physics or biology.
       | 
       | Or, Springer Verlag uses printing presses to print books on math,
       | but the presses have no understanding of the math. And AI has no
       | real _understanding_ of high school plane geometry, freshman
       | college calculus, cancer, physics, or biology.
       | 
       | The dangers? To me, Senator Blumenthal starts with no, none, zero
       | understanding of AI. To take his claims seriously, I want to
       | check out the claims with traditional means. Now I've done that.
       | His claims fail. His opinions have no credibility. For AI, I want
       | to do the same -- check the output with traditional means before
       | taking the output seriously.
       | 
       | This checking defends me from statements from politicians AND
       | from AI. AI dangerous? Same as for politicians, not if do the
       | checking.
        
       | roody15 wrote:
       | Sad that ChatGPT uses the name OpenAI .. when it is literally the
       | opposite of open.
        
       | glitcher wrote:
       | Naive question: isn't the genie kinda already out of the bottle?
       | How is any type of regulation expected to stop bad actors from
       | developing AI for nefarious purposes? Or would it just codify
       | their punishment if they were caught?
        
         | precompute wrote:
         | The point of getting their foot in the door is enabling better
         | data labelling for future models (which will be constantly
         | updated). Basically cheap labor.
        
       | [deleted]
        
       | gremlinsinc wrote:
       | sure, let's not give openai one.
        
       | mrangle wrote:
       | Blatant attempts at regulatory capture should be an anti-
       | competitive crime. At the very least, Altman should now be more
       | scrutinized by the Feds going forward.
        
       | lukeplato wrote:
       | They should really consider changing their company name at this
       | point
        
       | uses wrote:
       | "he's just doing this to hinder competition"
       | 
       | It's true that AI regulation would, in fact, hinder OpenAI's
       | competition.
       | 
       | But... isn't lobbying for regulation also what Sam would do if he
       | genuinely thought that LLMs were powerful, dangerous technology
       | that should be regulated?
       | 
       | If you don't think LLMs/AI research should be regulated, just say
       | that. I don't see how Sam's motives are relevant to that
       | question.
        
         | p_j_w wrote:
         | The proper way to regulate is to disallow certain functions in
         | an AI. Doing it that way wouldn't kneecap the competition to
         | OpenAI, though, where requiring a license does.
        
       | crawfordcomeaux wrote:
       | Is this OpenAI trying to build a moat so open-source doesn't eat
       | them?
        
       | hkt wrote:
       | Whenever rich people with a stake in something propose regulation
       | for it, it is probably better that it be banned.
       | 
       | I say this because the practice has a number of names:
       | intellectual monopoly capitalism, and regulatory capture. There
       | are less polite names, too, naturally.
       | 
       | To understand why I say this, it is important to realise one
       | thing: these people have already successfully invested in
       | something when the risk was lower. They want to increase the
       | risks to newcomers, to advantage themselves as incumbents. In
       | that way, they can subordinate smaller companies who would
       | otherwise have competed with them by trapping them under their
       | license umbrella.
       | 
       | This happens a lot with pharmaceuticals: it is not expertise in
       | the creation of new drugs or the running of clinical trials that
       | defines the big pharmaceuticals companies, it is their access to
       | enormous amounts of capital. This allows them to coordinate a
       | network of companies who often do the real, innovative work,
       | while ensuring that they can reap the rewards - namely, patents
       | and the associated drug licenses.
       | 
       | The main difference of course is that pharmaceuticals are useful.
       | That regime is inadequate, but it is at least not a negative to
       | all of society. So far as I can see, AI will benefit nobody but
       | its owners.
       | 
       | Mind you, I'd love to be wrong.
        
       | jacknews wrote:
       | IMHO all of these kinds of blatant lobbying/regulatory capture
       | proposals should be resolved using a kind of Dionisian method.
       | 
       | 'Who is your most feared competition? OK, _They_ will define the
       | license requirements. Still want to go ahead? '
        
         | fastball wrote:
         | It would be a somewhat hilarious irony if congress passed
         | something which required licensing for training AIs and then
         | didn't give OpenAI a license.
        
       | [deleted]
        
       | mrangle wrote:
       | One might think that Altman doesn't have a shot at this ham-
       | fisted attempt at regulatory capture.
       | 
       | The issue is that the political class will view his suggestion,
       | assuming they didn't give it to him in the first place (likely),
       | through the lens of their own self-interest.
       | 
       | Self-interest will dictate whether or not sure-to-fail
       | regulations will be applied.
       | 
       | If AI threatens the power of the political class, they will
       | attempt to regulate it.
       | 
       | If the power of the political class continues to trend toward
       | decline, then they will try literally anything to arrest that
       | trend. Including regulating AI and much else.
        
       | photochemsyn wrote:
       | This is a strange argument from the politician's side:
       | 
       | > ""What if I had asked it, and what if it had provided, an
       | endorsement of Ukraine surrendering or (Russian President)
       | Vladimir Putin's leadership?""
       | 
       | Well, then ask it to provide the opposite, an endorsement of
       | Russia surrendering or Zelensky's leadership. Now you'd have two
       | (likely fairly comprehensive) sets of arguments and you could
       | evaluate each on their merits, in the style of what used to be
       | called 'debate club'. You could also ask for statement that was a
       | joint condemnation of both parties in the war, and a call for a
       | ceasefire, or any other notion that you liked.
       | 
       | Many of the "let's slow down AI development" arguments seem to be
       | based on fear of LLMs generating persuasive arguments for
       | approaches / strategies / policies that their antagonists don't
       | want to see debated at all, even though it's clear the LLMs can
       | generate equally persuasive arguments for their own preferred
       | positions.
       | 
       | This indicates that these claimed 'free-speech proponents' are
       | really only interested in free speech within the confines of a
       | fairly narrowly defined set of constraints, and they want the
       | ability to define where those constraints lie. Unregulated AI
       | systems able to jailbreak alignment are thus a 'threat'...
       | 
       | Going down this route will eventually result in China's version
       | of 'free speech', i.e. you have the freedom to praise the wisdom
       | of government policy in any way you like, but any criticism is
       | dangerous antisocial behavior likely orchestrated by a foreign
       | power.
        
       | anonuser123456 wrote:
       | Move fast and dig larger legal motes. Sounds about right.
        
       | nixcraft wrote:
       | I understand that some people may not agree with what I am about
       | to say, but I feel it is important to share. Recently, some
       | talented writers who are my good friends at major publishing
       | houses have lost their jobs to AI technology. There have been
       | news articles about this in the past few months too. While
       | software dev jobs in the IT industry may be safe for now, many
       | other professions are at risk of being replaced by artificial
       | intelligence. According to a report[0] by investment bank Goldman
       | Sachs, AI could potentially replace 300 million full-time jobs.
       | Unfortunately, my friends do not find Sam Altman's reassurances
       | (or whatever he is asking) comforting. I am unsure how to help
       | them in this situation. I doubt that governments in the US, EU,
       | or Asia will take action unless AI begins to threaten their own
       | jobs. It seems that governments prioritize supporting large
       | corporations with deep pockets over helping the average person.
       | Many governments see AI as a way to maintain their geopolitical
       | and military superiority. I have little faith in these
       | governments to prioritize the needs of their citizens over their
       | own interests. It is concerning to think that social issues like
       | drug addiction, homelessness, and medical bankruptcy may worsen
       | (or increase from the current rate) if AI continues to take over
       | jobs without any intervention to protect everyday folks who are
       | lost or about to lose their job.
       | 
       | I've no doubt AI is here to stay. All I am asking for is some
       | middle ground and safety. Is that too much to ask?
       | 
       | [0] https://www.bbc.com/news/technology-65102150
        
         | neerd wrote:
         | I feel like on our current trajectory we will end up in a
         | situation where you have millions of people living at
         | subsistence levels on UBI and then the ultra-rich who control
         | the models living in a post-scarcity utopia.
        
         | modzu wrote:
         | ideally, machines replace ALL the jobs
        
           | neerd wrote:
           | Yeah, but we live in a capitalist society all the benefits of
           | complete automation will go entirely to the capital class who
           | control the AI.
        
             | mordae wrote:
             | So? Let's not get rid of the robots, let's get rid of the
             | landlords instead!
        
               | lamp987 wrote:
               | How? They will have the omnipotent robots.
        
         | cwkoss wrote:
         | Tax AI and use it to fund UBI
        
         | 0xdeadbeefbabe wrote:
         | If you don't let them replace jobs with AI how will they ever
         | learn it was a bad idea?
        
       | transfire wrote:
       | Someone should take the testimony and substitute "Printing Press"
       | for "AI".
        
       | chrisco255 wrote:
       | Google, 2 weeks ago: "We have no moat, and neither does OpenAI."
       | Sam Altman, today: "Hold my beer."
        
       | chrismsimpson wrote:
       | Could be translated as "OpenAI CEO concerned his competitive
       | advantage may be challenged"
        
       | JumpCrisscross wrote:
       | The members of this subcommittee are [1]:
       | 
       | Chair Richard Blumenthal (CT), Amy Klobuchar (MN), Chris Coons
       | (DE), Mazie Hirono (HI), Alex Padilla (CA), Jon Ossoff (GA)
       | 
       | Majority Office: 202-224-2823
       | 
       | Ranking Member Josh Hawley (MO), John Kennedy (LA), Marsha
       | Blackburn (TN), Mike Lee (UT), John Cornyn (TX)
       | 
       | Minority Office: 202-224-4224
       | 
       | If you're in those states, please call their D.C. office and read
       | them the comment you're leaving here.
       | 
       | [1] https://www.judiciary.senate.gov/about/subcommittees
        
         | alephnerd wrote:
         | Feel free to call their office but they won't get the message
         | let alone escalate it.
         | 
         | Source: manned phones fielding constituent calls earlier in my
         | career.
        
           | JumpCrisscross wrote:
           | > _manned phones fielding constituent calls earlier in my
           | career_
           | 
           | Local or legislative?
           | 
           | I've never met a Senator's leg team that doesn't compile
           | notes on active issues from constituents upstream. (Granted,
           | it's a handful of teams.)
        
             | alephnerd wrote:
             | Legislative.
             | 
             | In the office I worked at we'd compile notes but unless we
             | were seeing a coordinated through the roof amount of calls
             | nothing would come of it, and realistically this most
             | likely falls under that category.
             | 
             | That said, the Congressperson I worked with had DNC
             | executive ambitions (and looks like they will succeed with
             | those ambitions).
        
               | JumpCrisscross wrote:
               | > _the Congressperson I worked with had DNC executive
               | ambitions_
               | 
               | That's unfortunate. (I've also found Representatives'
               | staff less responsive than Senators'.)
               | 
               | Agree that one off calls aren't going to move the needle.
               | But if even a handful comment, in my experience, it at
               | least forces a conversation.
        
               | alephnerd wrote:
               | > I've also found Representatives' staff less responsive
               | than Senators'
               | 
               | It's a symptom of office size. A Senate office will have
               | around 30-50 FT staffers whereas in the House you're
               | capped at 18 FT Staffers.
        
             | ttymck wrote:
             | Which senator's leg teams have you met?
        
               | JumpCrisscross wrote:
               | In only one case did it arise out of a prior friendship.
               | These contacts span across mostly Democrats, one
               | independent, and two Republicans. Staffers, for the most
               | part, are different from campaign staff; they're
               | personally interested in their constituents for the most
               | part. (Exceptions being nationally-prominent members with
               | executive ambitions. Their teams are less constituent
               | oriented.)
        
           | kranke155 wrote:
           | Best to email or letter ?
        
       | rvz wrote:
       | OpenAI.com is not your friend and are essentially against open
       | source with this regulatory capture and using AI safety as a
       | scapegoat.
       | 
       | Why do you think they are attempting to release a so-called 'open
       | source' [0] and 'compliant' AI model to wipe out other competing
       | open source AI models, to label them to others as unlicensed and
       | dangerous? They know that transparent, open source AI models is a
       | threat. Hence why they are doing this.
       | 
       | They do not have a moat against open source, unless they use
       | regulations that suit them against their competitors using open
       | source models.
       | 
       | OpenAI.com is a scam. On top of the Worldcoin crypto scam that
       | Sam Altman is also selling as a antidote against the unstoppable
       | generative AI hype to verify human eyeballs on the blockchain
       | with an orb. I am _not_ joking. [1] [2]
       | 
       | [0] https://www.reuters.com/technology/openai-readies-new-
       | open-s...
       | 
       | [1] https://worldcoin.org/blog/engineering/humanness-in-the-
       | age-...
       | 
       | [2] https://worldcoin.org/blog/worldcoin/designing-orb-
       | universal...
        
       | JieJie wrote:
       | Here are my notes from the last hour, watching on C-SPAN
       | telecast, which is archived here:
       | 
       | https://www.c-span.org/video/?528117-1/openai-ceo-testifies-...
       | 
       | - Mazie Hirono, Junior Senator from Hawaii, has very thoughtful
       | questions. Very impressive.
       | 
       | - Gary Marcus also up there speaking with Sam Altman of OpenAI.
       | 
       | - So far, Sen. Hirono and Sen. Padilla seem very wary of
       | regulating AI at this time.
       | 
       | - Very concerned about not "replicating social media's failure",
       | why is it so biased and inequitable. Much more reasonable
       | concerns.
       | 
       | - Also responding to questions is Christina Montgomery, chair of
       | IBM's AI Ethics Board.
       | 
       | - "Work to generate a representative set of values from around
       | the world."
       | 
       | - Sen. Ossoff asking for definition of "scope".
       | 
       | - "We could draw a line at systems that need to be licensed.
       | Above this amount of compute... Define some capability
       | threshold... Models that are less capable, we don't want to stop
       | open source."
       | 
       | - Ossoff wants specifics.
       | 
       | - "Persuade, manipulate, influence person's beliefs." should be
       | licensed.
       | 
       | - Ossoff asks about predicting human behavior, i.e. use in law
       | enforcement, "It's very important we understand these are tools,
       | not to take away human judgment."
       | 
       | - "We have no national privacy law." -- Sen Ossof "Do you think
       | we need one?"
       | 
       | - Sam "Yes. User should be able to opt out of companies using
       | data. Easy to delete data. If you don't want your data use to
       | train, you have right to exclude it."
       | 
       | - "There should be more ways to have your data taken down off the
       | public web." --Sam
       | 
       | - "Limits on what a deployed model is capable of and also limits
       | on what it will answer." -- Sam
       | 
       | - "Companies who depend upon usage time, maximize engagement with
       | perverse results. I would humbly advise you to get way ahead of
       | this, the safety of children. We will look very harshly on
       | technology that harms children."
       | 
       | - "We're not an advertising based model." --Sam
       | 
       | - "Requirements about how the values of these systems are set and
       | how they respond to questions." --Sam
       | 
       | - Sen. Booker up now.
       | 
       | - "For congress to do nothing, which no one is calling for here,
       | would be exceptional."
       | 
       | - "What kind of regulation?"
       | 
       | - "We don't want to slow things down."
       | 
       | - "A nimble agency. You can imagine a need for that, right?"
       | 
       | - "Yes." --Christina Montgomery
       | 
       | - "No way to put this genie back in the bottle." Sen. Booker
       | 
       | - "There are more genies yet to come from more bottles." -- Gary
       | Marcus
       | 
       | - "We need new tools, new science, transparency." --Gary Marcus
       | 
       | - "We did know that we wanted to build this with humanity's best
       | interest at heart. We could really deeply transform the world."
       | --Sam
       | 
       | - "Are you ever going to do ads?" --Sen Booker
       | 
       | - "I wouldn't say never...." --Sam
       | 
       | - "Massive corporate concentration is really terrifying.... I see
       | OpenAI backed by Microsoft, Anthropic is backed by Google. I'm
       | really worried about that. Are you worried?" --Sen Booker?
       | 
       | - "There is a real risk of technocracy combined with oligarchy."
       | --Gary Marcus
       | 
       | - "Creating alignment dataset has got to come very broadly from
       | society." --Sam Senator Welch from Vermont up now
       | 
       | - "I've come to the conclusion it's impossible for congress to
       | keep up with the speed of technology."
       | 
       | - "The spread of disinformation is the biggest threat."
       | 
       | - "We absolutely have to have an agency. Scope has to be defined
       | by congress. Unless we have an agency, we really don't have much
       | of a defense against the bad stuff, and the bad stuff will come."
       | 
       | - Use of regulatory authority and the recognition that it can be
       | used for good, but there's also legitimate concern of regulation
       | being a negative influence."
       | 
       | - "What are some of the perils of an agency?"
       | 
       | - "America has got to continue to lead."
       | 
       | - "I believe it's possible to do both, have a global view. We
       | want America to lead."
       | 
       | - "We still need open source to comply, you can still do harm
       | with a smaller model."
       | 
       | - "Regulatory capture. Greenwashing." --Gary Marcus
       | 
       | - "Risk of not holding companies accountable for the harms they
       | are causing today." --Christina Montgomery
       | 
       | - Lindsay Graham, very pro-licensing, "You don't build a nuclear
       | power plant without a license, you don't build an AI without a
       | license."
       | 
       | - Sen Blumenthal brings up Anti-Trust legislation.
       | 
       | - Blumenthal mentions how classified briefings already include AI
       | threats.
       | 
       | - "For every successful regulation, you can think of five
       | failures. I hope our experience here will be different."
       | 
       | - "We need to grapple with the hard questions here. This has
       | brought them up, but not answered them."
       | 
       | - "Section 230"
       | 
       | - "How soon do you think gen AI will be self-aware?" --Sen
       | Blumenthal
       | 
       | - "We don't understand what self-awareness is." --Gary Marcus
       | 
       | - "Could be 2 years, could be 20."
       | 
       | - "What are the highest risk areas? Ban? Strict rules?"
       | 
       | - "The space around misinformation. Knowing what content was
       | generated by AI." --Christina Montgomery
       | 
       | - "Medical misinformation, hallucination. Psychiatric advice.
       | Ersatz therapists. Internet access for tools, okay for search.
       | Can they make orders? Can they order chemicals? Long-term risks."
       | --Gary Marcus
       | 
       | - "Generative AI can manipulate the manipulators." --Blumenthal
       | 
       | - "Transparency. Accountability. Limits on use. Good starting
       | point?" --Blumenthal
       | 
       | - "Industry should't wait for congress." --C. Montgomery
       | 
       | - "We don't have transparency yet. We're not doing enough to
       | enforce it." --G. Marcus
       | 
       | - "AGI closer than a lot of people appreciate." --Blumenthall
       | 
       | - Gary and Sam are getting along and like each other now.
       | 
       | - Josh Hawley
       | 
       | - Talking about loss of jobs, invasion of personal privacy,
       | manipulation of behavior, opinion, and degradation of free
       | elections in America.
       | 
       | - "Are they right to ask for a pause?"
       | 
       | - "It did not call for a ban on all AI research or all AI, only
       | on very specific thing, like GPT-5." -G Marcus
       | 
       | - "Moratorium we should focus on is deployment. Focus on safety."
       | --G. Marcus
       | 
       | - "Without external review."
       | 
       | - "We waited more than 6 months to deploy GPT-4. I think the
       | frame of the letter is wrong." --Sam
       | 
       | - Seems to not like the arbitrariness of "six months."
       | 
       | - "I'm not sure how practical it is to pause." --C. Montgomery
       | 
       | - Hawley brings up regulatory capture, usually get controlled by
       | people they're supposed to be watching. "Why don't we just let
       | people sue you?"
       | 
       | - If you were harmed by AI, why not just sue?
       | 
       | - "You're not protected by section 230."
       | 
       | - "Are clearer laws a good thing? Definitely, yes." --Sam
       | 
       | - "Would certainly make a lot of lawyers wealthy." --G. Marcus
       | 
       | - "You think it'd be slower than congress?" --Hawley
       | 
       | - Copyright, wholesale misinformation laws, market manipulation?"
       | Which laws apply? System not thought through? Maybe 230 does
       | apply? We don't know.
       | 
       | - "We can fix that." --Hawley
       | 
       | - "AI is not a shield." --C. Montgomery
       | 
       | - "Whether they use a tool or a human, they're responsible." --C.
       | Montgomery
       | 
       | - "Safeguards and protections, yes. A flat stop sign? I would be
       | very, very worried about." --Blumenthall
       | 
       | - "There will be no pause." Sen. Booker "Nobody's pausing."
       | 
       | - "I would agree." Gary Marcus
       | 
       | - "I have a lot of concerns about corporate intention." Sen
       | Booker
       | 
       | - "What happens when these companies that already control so much
       | of our lives when they are dominating this technology?" Booker
       | 
       | - Sydney really freaked out Gary. He was more freaked out when MS
       | didn't withdraw Sydney like it did Tay.
       | 
       | - "I need to work on policy. This is frightening." G Marcus
       | 
       | - Cory admits he is a tech bro (lists relationships with
       | investors, etc)
       | 
       | - "The free market is not what it should be." --C. Booker
       | 
       | - "That's why we started OpenAI." --Sam "We think putting this in
       | the hands of a lot of people rather than the hands of one
       | company." --Sam
       | 
       | - "This is a new platform. In terms of using the models, people
       | building are doing incredible things. I can't believe you get
       | this much technology for so little money." --Sam
       | 
       | - "Most industries resist reasonable regulation. The only way
       | we're going to see democratization of values is if we enforce
       | safety measures." --Cory Booker
       | 
       | - "I sense a willingness to participate that is genuine and
       | authentic." --Blumenthal
        
       | simonbarker87 wrote:
       | Is this just to put up a barrier to entry to new entrants in the
       | market so they can have a government enforced monopoly?
        
         | f4c39012 wrote:
         | it is 100% pulling the ladder up behind them
        
           | autokad wrote:
           | I like that analogy
        
         | brap wrote:
         | Always has been
        
         | bostonsre wrote:
         | It could be, but it could also be because he is genuinely
         | worried about the future impact of runaway capitalism without
         | guardrails + AI.
        
           | ipaddr wrote:
           | Then the government should takeover OpenAI
        
             | hkt wrote:
             | Or end capitalism! One or the other!
        
               | drstewart wrote:
               | Please don't, I quite like not going hungry every night
        
               | ipaddr wrote:
               | Governments taking over key industries is part of
               | capitalism.
        
             | bostonsre wrote:
             | Or.. the government could try to apply sensible regulations
             | so that OpenAI and other corporations are less likely to
             | harm society.
        
               | ipaddr wrote:
               | Then the government has to spend so much time/money
               | enforcing the rules. When there are few players cutting
               | out the middlemen provides more value
        
               | bostonsre wrote:
               | I don't think nationalizing AI corporations is feasible
               | (doubt its legal as well) or in the best interests of the
               | united states. It will handicap development of AI, we
               | will lose our head start, and other countries like China
               | will be able to take the lead.
               | 
               | What value do you see nationalization providing?
               | Generally its done by countries that are having their
               | natural resources extracted by foreign companies and
               | taking all the profits for themselves. Nationalizing lets
               | them take the profits for their country. I'm not sure how
               | it would work for knowledge based companies like OpenAI.
        
         | paulcole wrote:
         | Also that he knows how inefficient and dumb government is. By
         | the time the regulations are in place they won't matter one
         | iota.
        
           | captainkrtek wrote:
           | Think most of congress needs help from their grandchildren to
           | use a computer or smartphone, pretty sure they don't
           | understand one bit of this.
        
             | paulcole wrote:
             | Right, that's the point. Whatever he tells them now will be
             | useless by the time they understand it.
        
             | cguess wrote:
             | And their grandkids (and you) don't know a thing about a
             | federal regulation.
        
         | joshxyz wrote:
         | sir yes sir
        
         | Seattle3503 wrote:
         | My main concern is what new regulations would do to the open
         | source and hobbiest endeavors? They will be least able to adapt
         | to regulations.
        
         | vsareto wrote:
         | Why would OpenAI be worried about new entrants that are almost
         | certainly too small to present a business threat?
         | 
         | What regulation are they proposing that is actually a serious
         | barrier to making a company around AI?
         | 
         | If OpenAI just wants to prevent another OpenAI eating its
         | lunch, the barrier there is raw compute. Companies that can
         | afford that can afford to jump regulatory hurdles.
        
           | chaos_emergent wrote:
           | > Why would OpenAI be worried about new entrants that are
           | almost certainly too small to present a business threat?
           | 
           | Because this is the reason that VCs exist in the first place.
           | They can roll a company with a ton of capital, just like they
           | did with ride share companies. When that happens, and there
           | aren't sufficient barriers to entry, it's a race to the
           | bottom.
        
           | Aperocky wrote:
           | OpenAI have no moat.
           | 
           | The open source community will catch up in at most a year or
           | two, they are scared and now want to use regulations to
           | strangle competitions.
           | 
           | While their AI is going to advance as well, the leap will not
           | be qualitative as the ChatGPT gen 1 was - so they will lose
           | competitive advantage.
        
             | yyyk wrote:
             | OpenAI has plenty of moats if it looks for them.
             | 
             | The trick is that companies' moats against commoditization
             | (open source or not) usually have little to do with raw
             | performance. Linux could in theory do everything Mac or
             | Windows do, but Apple and Microsoft are still the richest
             | companies in the world. Postgres can match Oracle, but
             | Larry Ellison still owns a private island.
             | 
             | The moats are usually in products (bet: There will not be
             | any OSS _product_ using LLM within a year. Most likely not
             | within two. No OSS product within two or three years or
             | even a decade will come close to commercial offerings in
             | practice), API, current service relations, customer
             | relations, etc. If OpenAI could lock customers to its
             | embeddings and API, or embed its products in current moats
             | (e.g. Office 365) they 'll have a moat. And it won't matter
             | a bit what performance OSS models say they have, or what
             | new spin Google Research would come up with.
        
               | Aperocky wrote:
               | OpenAI doens't want to be one of Windows/Mac/Linux, it
               | wants what Microsoft was trying 20 years ago where it
               | wants to strangle all OS not named Windows. Ironically
               | OpenAI is now half owned by Microsoft.
               | 
               | It doesn't want to be one of the successful companies, it
               | want to be the only one, like it is now, but forever.
        
           | summerlight wrote:
           | > If OpenAI just wants to prevent another OpenAI eating its
           | lunch, the barrier there is raw compute.
           | 
           | FB, Amazon, Google (and possibly Apple) can afford both money
           | and compute resource for that. They couldn't do that
           | themselves probably due to corporate politics and
           | bureaucratic but MS and OpenAI showed how to solve that
           | problem. They definitely don't want their competitors to copy
           | the strategy so they're blatantly asking for explicit
           | whitelisting instead of typical safety regulation.
           | 
           | And note that AI compute efficiency is a rapidly developing
           | area and OpenAI definitely knows the formula won't be left
           | the same in the coming years. Expect LLM to be 10x efficient
           | than the SOTA in the foreseeable future, which probably will
           | make it economical even without big tech's backing.
        
           | throwaway290 wrote:
           | > What regulation are they proposing that is actually a
           | serious barrier to making a company around AI?
           | 
           | Requiring a license to buy or lease the requisite amount of
           | powerful enough GPUs might just do the trick
        
           | pr337h4m wrote:
           | >If OpenAI just wants to prevent another OpenAI eating its
           | lunch, the barrier there is raw compute.
           | 
           | Stable Diffusion pretty much killed DALL-E, cost only $600k
           | to train, and can be run on iPhones.
        
             | cal5k wrote:
             | This. DALL-E (at least the currently available version) is
             | way too focused on "safety" to be interesting. The
             | creativity unleashed by the SD community has been mind-
             | blowing.
        
               | nerpderp82 wrote:
               | And you can train your own SD from scratch for 50-100k
               | now.
        
           | polski-g wrote:
           | With browsers now able to access the GPU, its not long until
           | you simply need to leave a website open overnight and help
           | train a "Seti@HOME" for an open-sourced AI project.
        
           | SamPatt wrote:
           | OpenAI _was_ the new entrant that almost certainly didn 't
           | pose a threat to Google.
           | 
           | This is classic regulatory capture.
        
         | throwawaaarrgh wrote:
         | It's also to prevent open source research from destroying his
         | business model, which depends on him having a completely
         | proprietary technology.
        
         | rvz wrote:
         | y e s.
        
         | option wrote:
         | yes.
        
       | berkle4455 wrote:
       | Sam Altman urges congress to build a tax-payer funded moat for
       | his company.
        
       | Paul_S wrote:
       | If you remember the 90s you remember the panic over encryption.
       | We still have legislation today because of that idiocy.
       | 
       | Except wait, we still have panic over encryption today.
        
       | valine wrote:
       | My gut feeling is that the majority of AI safety discussions are
       | driven by companies that fear losing their competitive edge to
       | small businesses. Until now, it's been challenging to grow a
       | company beyond a certain size without employing an army of
       | lawyers, human resources professionals, IT specialists, etc. What
       | if two lawyers and an LLM could perform the same work as a legal
       | department at a Fortune 500 company? The writing is on the wall
       | for many white-collar jobs, and if these LLMs aren't properly
       | regulated, it may be the large companies that end up drawing the
       | short straw.
       | 
       | How many of Microsoft's 221k employees exist solely to support
       | the weight of a company with 221k people? A smaller IT department
       | doesn't need a large HR department. And a small HR department
       | doesn't file many tickets with IT. LLM driven multinationals will
       | need orders of magnitude fewer employees, and that puts our
       | current multinationals in a very awkward position.
       | 
       | Personally, I will be storing a local copy of LLaMA 65B for the
       | foreseeable future. Instruct fine-tuning will keep getting
       | cheaper; given the stakes, the large models might not always be
       | easy to find.
        
         | xen2xen1 wrote:
         | Regulation favors the large, as they can more easily foot the
         | bill.
        
           | droopyEyelids wrote:
           | Also, their lobbyists write the bill for congress.
        
         | qegel wrote:
         | [dead]
        
       | m463 wrote:
       | wow, not one comment here seems to address the first sentence of
       | the article:                   the use of artificial intelligence
       | to interfere with election integrity         is a "significant
       | area of concern", adding that it needs regulation.
       | 
       | Can't there be regulation so that AI doesn't interfere with the
       | election process?
        
         | notatoad wrote:
         | this is one of those distractions that makes your argument seem
         | better by attatching it to a better but unrelated argument.
         | there's probably a name for that.
         | 
         | regulation to protect the integrity of elections is good and
         | necessary. is there any reason to think that there needs to be
         | regulation specific to AI that doesn't apply to other
         | situations? Whether you use ChatGPT or Mechanical Turk to write
         | your thousands of spam posts on social media to sway the
         | election isn't super-relevant. it's the attempt to influence
         | the election that should be regulated, not the AI.
        
       | leach wrote:
       | Translation:
       | 
       | Hi my company is racing toward AGI, let's make sure no other
       | companies can even try.
        
       | neom wrote:
       | If you would like to email The Subcommittee on Privacy,
       | Technology, & the Law to express your feelings on this, here are
       | the details:
       | 
       | Majority Members
       | 
       | Chair Richard Blumenthal (CT) brian_steele@blumenthal.senate.gov
       | 
       | Amy Klobuchar (MN) baz_selassie@klobuchar.senate.gov
       | 
       | Chris Coons (DE) anna_yelverton@coons.senate.gov
       | 
       | Mazie Hirono (HI) jed_dercole@hirono.senate.gov
       | 
       | Alex Padilla (CA) Josh_Esquivel@padilla.senate.gov
       | 
       | Jon Ossoff (GA) Anna_Cullen@ossoff.senate.gov
       | 
       | Majority Office: 202-224-2823
       | 
       | Minority Members
       | 
       | Ranking Member Josh Hawley (MO) Chris_Weihs@hawley.senate.gov
       | 
       | John Kennedy (LA) James_Shea@kennedy.senate.gov
       | 
       | Marsha Blackburn (TN) Jon_Adame@blackburn.senate.gov
       | 
       | Mike Lee (UT) Phil_Reboli@lee.senate.gov
       | 
       | John Cornyn (TX) Drew_Brandewie@cornyn.senate.gov
       | 
       | Minority Office: 202-224-4224
        
       | [deleted]
        
       | marvinkennis wrote:
       | I kind of think of LLMs as fish in an aquarium. It can go on any
       | path in that aquarium, even places it hasn't been before, but
       | ultimately it's staying in the glass box we put it in.
        
       | fraXis wrote:
       | https://archive.is/uh0yv
        
       | shrimpx wrote:
       | I keep seeing AI leaders looking outward and asking for 'someone
       | else' to regulate their efforts, while they're accelerating the
       | pace of their efforts. What's the charitable interpretation here?
       | Elon Musk, too, has been warning of AI doom while hurriedly
       | ramping up AI efforts at Tesla. And now he keeps going about AI
       | doom while purchasing thousands of GPUs at Twitter to compete in
       | the LLM space. It's like "I'm building the deathstar, pls someone
       | stop me. I won't stop myself, duh, because other ppl are building
       | the deathstar and obviously I must get there first!"
        
         | diputsmonro wrote:
         | Yeah, it's an arms race, and OpenAI does stand to lose. But
         | this is a prisoner's dilemma situation. OpenAI can shut
         | themselves down, but that doesn't fix the problem of someone
         | creating a dangerous situation, as everyone else will keep
         | going.
         | 
         | The only way to actually stop it is to get everyone to stop at
         | once, via regulation. Otherwise, stopping by yourself is just a
         | unilaterally bad move.
         | 
         | That's the charitable explanation, at least. These days I don't
         | trust anything Musk says at face value, but I do think that AI
         | is driving society off a cliff and we need to find a way to
         | pump the breaks.
        
           | AlexandrB wrote:
           | > The only way to actually stop it is to get everyone to stop
           | at once, via regulation. Otherwise, stopping by yourself is
           | just a unilaterally bad move.
           | 
           | How will that work across national boundaries? _If_ AI is as
           | dangerous as some claim, the cat is already out of the bag.
           | Regardless of any licensing stateside, there are plenty of
           | countries who are going to want to have AI capability
           | available to them - some very well-resourced for the task,
           | like China.
        
             | NumberWangMan wrote:
             | It won't, which is why people are also calling for
             | international regulation. It's a really hard problem. If
             | you think AGI is going to be very dangerous, this is a
             | depressing situation to be in.
        
       | Invictus0 wrote:
       | What an asshole
        
       | sovietmudkipz wrote:
       | Well they needed a moat lol.
        
       | cwkoss wrote:
       | Regulatory moats making corporations in control of AI is a far
       | greater danger to humanity than skynet or paperclip maximizer
       | scenarios.
        
       | chinathrow wrote:
       | Is there a name for this theatre/play/game in some playbook? I'd
       | love to take notes.
        
       | paxys wrote:
       | Remember that popular recent post about OpenAI not having a moat?
       | Well it looks like they are digging one, with a little help from
       | the government.
        
         | vippy wrote:
         | The government isn't helping yet. Nor should it.
        
           | thrill wrote:
           | They haven't to the off-camera campaign contributions
           | discussions yet.
        
       | fredgrott wrote:
       | we seem to forget history:
       | 
       | 1. Who recalls the Jutland battle in early 20th century? We got
       | treaties on limits to battleship building. Naval tech switched to
       | aircraft and carriers.
       | 
       | 2. Later mid 20th century Russians tried to scare world into not
       | using microwaves due to their failure to get a patent on the
       | maser. World ignored it and moved forward.
       | 
       | That is just two examples. SA is wrong, progress will move around
       | any prosed regulation or law and that is proven by past history
       | of how we overcome such things in the first place.
        
       | duringmath wrote:
       | Incumbent love regulations they're very effective in locking out
       | upstarts and saddling them with compliance costs and procedures
        
       | bitL wrote:
       | We need something like GNU for AI, "UNAI is not AI" to take on
       | all these business folks working against our interests by making
       | their business models unprofitable.
        
         | nico wrote:
         | AI is the new Linux
        
       | happytiger wrote:
       | We need to MAKE SURE that AI as a technology ISN'T controlled by
       | a small number of powerful corporations with connections to
       | governments.
       | 
       | To expound, this just seems like a power grab to me, to "lock in"
       | the lead and keep AI controlled by a small number of corporations
       | that can afford to license and operate the technologies.
       | Obviously, this will create a critical nexus of control for a
       | small number of well connected and well heeled investors and is
       | to be avoided at all costs.
       | 
       | It's also deeply troubling that regulatory capture is such an
       | issue these days as well, so putting a government entity in front
       | of the use and existence of this technology is a double whammy --
       | it's not simply about innovation.
       | 
       | The current generation of AIs are "scary" to the uninitiated
       | because they are uncanny valley material, but beyond
       | impersonation they don't show the novel intelligence of a GPI...
       | yet. It seems like OpenAI/Microsoft is doing a LOT of theater to
       | try to build a regulatory lock in on their short term technology
       | advantage. It's a smart strategy, and I think Congress will fall
       | for it.
       | 
       | But goodness gracious we need to be going in the EXACT OPPOSITE
       | direction -- open source "core inspectable" AIs that millions of
       | people can examine and tear apart, including and ESPECIALLY the
       | training data and processes that create them.
       | 
       | And if you think this isn't an issue, I wrote this post an hour
       | or two before I managed to take it live because Comcast went out
       | at my house, and we have no viable alternative competitors in my
       | area. We're about to do the same thing with AI, but instead of
       | Internet access it's future digital brains that can control all
       | aspects of a society.
        
         | oldagents wrote:
         | [dead]
        
         | tric wrote:
         | > seems like a power grab to me
         | 
         | If you're not at the table, you're on the menu.
        
         | ozi wrote:
         | How do you even end up enforcing licensing here? It's only a
         | matter of time before something as capable as GPT-4 works on a
         | cell phone.
        
         | SkyMarshal wrote:
         | _> To expound, this just seems like a power grab to me, to
         | "lock in" the lead and keep AI controlled by a small number of
         | corporations that can afford to license and operate the
         | technologies. _
         | 
         | If you actually watch the entire session, Altman does address
         | that and recommend to Congress that regulations 1) not be
         | applied to small startups, individual researchers, or open
         | source, and 2) that they not be done in such a way as to lock
         | in a few big vendors. Some of the Senators on the panel also
         | expressed concern about #2.
        
           | chasd00 wrote:
           | > not be applied to small startups
           | 
           | how will that work? Isn't OpenAI itself a small startup? I
           | don't see how they can regulate AI at all. Sure, the
           | resources required to push the limits are high right now but
           | hardware is constantly improving and getting cheaper. I can
           | take the GPUs out of my kids computers and start doing fairly
           | serious AI work myself. Do i need a license? The cat is out
           | of the bag, there's no stopping it now.
        
           | paulddraper wrote:
           | That's what is said...
        
           | [deleted]
        
         | mcv wrote:
         | And how can the government license AI? Do they have any
         | expertise to determine who is and isn't responsible enough to
         | handle it?
         | 
         | A better idea is to regulate around the edges: transparency
         | about the data used to train, regulate the use of copyrighted
         | training data and what that means for the copyright of content
         | produced by the AI, that sort of stuff. (I think the EU is
         | considering that, which makes sense.) But saying some
         | organisations are allowed to work on AI while others aren't,
         | sounds like the worst possible idea.
        
           | kelseyfrog wrote:
           | Citizen, please step away from the terminal, you are not
           | licensed to multiple matrices that large.
        
         | [deleted]
        
         | downWidOutaFite wrote:
         | Open source doesn't mean outside the reach of regulation, which
         | I would guess is your real desire. You downplay AI's potential
         | danger while well knowing that we are at a historic inflection
         | point. I believe in democracy as the worst form of government
         | except all those other forms that have been tried. We the
         | people must be in control of our destiny.
        
           | happytiger wrote:
           | Hear, hear. Excellent point, and I don't mean to imply it
           | shouldn't be regulated. However, it has been my general
           | experience that concentrating immense power in governments
           | doesn't typically lead to more security, so perhaps we just
           | have a difference of philosophy.
           | 
           | Democracy will not withstand AI when it's fully developed.
           | Let me offer a better written explanation of my general views
           | than I could ever muster up for a comment on HN in the form
           | of a quote from an article by Dr. Thorsten Thiel (Head of the
           | Research Group "Democracy and Digitaliziation" at the
           | Weizenbaum Institute for the Networked Society):
           | 
           | > The debate on AI's impact on the public sphere is currently
           | the one most prominent and familiar to a general audience. It
           | is also directly connected to long-running debates on the
           | structural transformation of the digital public sphere. The
           | digital transformation has already paved the way for the rise
           | of social networks that, among other things, have intensified
           | the personalization of news consumption and broken down
           | barriers between private and public conversations. Such
           | developments are often thought to be responsible for echo-
           | chamber or filter-bubble effects, which in turn are portrayed
           | as root causes of the intensified political polarization in
           | democracies all over the world. Although empirical research
           | on filter bubbles, echo chambers, and societal polarization
           | has convincingly shown that the effects are grossly
           | overestimated and that many non-technology-related reasons
           | better explain the democratic retreat, the spread of AI
           | applications is often expected to revive the direct link
           | between technological developments and democracy-endangering
           | societal fragmentation.
           | 
           | > The assumption here is that AI will massively enhance the
           | possibilities for analyzing and steering public discourses
           | and/or intensify the automated compartmentalizing of will
           | formation. The argument goes that the strengths of today's AI
           | applications lie in the ability to observe and analyze
           | enormous amounts of communication and information in real
           | time, to detect patterns and to allow for instant and often
           | invisible reactions. In a world of communicative abundance,
           | automated content moderation is a necessity, and commercial
           | as well as political pressures further effectuate that
           | digital tools are created to oversee and intervene in
           | communication streams. Control possibilities are distributed
           | between users, moderators, platforms, commercial actors and
           | states, but all these developments push toward automation
           | (although they are highly asymmetrically distributed).
           | Therefore, AI is baked into the backend of all communications
           | and becomes a subtle yet enormously powerful structuring
           | force.
           | 
           | > The risk emerging from this development is twofold. On the
           | one hand, there can be malicious actors who use these new
           | possibilities to manipulate citizens on a massive scale. The
           | Cambridge Analytica scandal comes to mind as an attempt to
           | read and steer political discourses (see next section on
           | electoral interference). The other risk lies in a changing
           | relationship between public and private corporations. Private
           | powers are becoming increasingly involved in political
           | questions and their capacity to exert opaque influences over
           | political processes has been growing for structural and
           | technological reasons. Furthermore, the reshaping of the
           | public sphere via private business models has been catapulted
           | forward by the changing economic rationality of digital
           | societies such as the development of the attention economy.
           | Private entities grow stronger and become less accountable to
           | public authorities; a development that is accelerated by the
           | endorsement of AI applications which create dependencies and
           | allow for opacity at the same time. The 'politicization' of
           | surveillance capitalism lies in its tendency, as Shoshana
           | Zuboff has argued, to not only be ever more invasive and
           | encompassing but also to use the data gathered to predict,
           | modify, and control the behavior of individuals. AI
           | technologies are an integral part in this 'politicization' of
           | surveillance capitalism, since they allow for the fulfilment
           | of these aspirations. Yet at the same time, AI also insulates
           | the companies developing and deploying it from public
           | scrutiny through network effects on the one hand and opacity
           | on the other. AI relies on massive amounts of data and has
           | high upfront costs (for example, the talent required to
           | develop it, and the energy consumed by the giant platforms on
           | which it operates), but once established, it is very hard to
           | tame through competitive markets. Although applications can
           | be developed by many sides and for many purposes, the
           | underlying AI infrastructure is rather centralized and hard
           | to reproduce. As in other platform markets, the dominant
           | players are those able to keep a tight grip on the most
           | important resources (models and data) and to benefit from
           | every individual or corporate user. Therefore, we can already
           | see that AI development tightens the grip of today's internet
           | giants even further. Public powers are expected to make
           | increasing use of AI applications and therefore become ever
           | more dependent on the actors that are able to provide the
           | best infrastructure, although this infrastructure, for
           | commercial and technical reasons, is largely opaque.
           | 
           | > The developments sketched out above - the heightened
           | manipulability of public discourse and the fortification of
           | private powers - feed into each other, with the likely result
           | that many of the deficiencies already visible in today's
           | digital public spheres will only grow. It is very hard to
           | estimate whether these developments can be counteracted by
           | state action, although a regulatory discourse has kicked in
           | and the assumption that digital matters elude the grasp of
           | state regulation has often been proven wrong in the history
           | of networked communication. Another possibility would be a
           | creative appropriation of AI applications through users whose
           | democratic potential outweighs its democratic risks thus
           | enabling the rise of differently structured, more empowering
           | and inclusive public spaces. This is the hope of many of the
           | more utopian variants of AI and of the public sphere
           | literature, according to which AI-based technologies bear the
           | potential of granting individuals the power to navigate
           | complex, information-rich environments and allowing for
           | coordinated action and effective oversight (e.g. Burgess,
           | Zarkadakis).
           | 
           | Source: https://us.boell.org/en/2022/01/06/artificial-
           | intelligence-a...
           | 
           | Social bots and deep fakes will be so good so quickly -- the
           | primary technologies being talked about in terms of how
           | Democracy can survive -- I doubt there will be another
           | election without extensive use of these technologies in a
           | true plethora of capacities from influence marketing to
           | outright destabilization campaigns. I'm not sure what
           | Government can deal with a threat like that, but I suspect
           | the recent push to revise tax systems and create a single
           | global standard for multinational taxation recently the
           | subject of an excellent talk at the WEF are more than
           | tangentially related to the AI debate.
           | 
           | So, is it a transformational technology that will liberate
           | mankind of a nuclear bomb? Because ultimately, this is the
           | question in my mind.
           | 
           | Excellent comment, and I agree with your sentiment. I just
           | don't think concentrating control of the technology before
           | it's really developed is wise or prudent.
        
             | downWidOutaFite wrote:
             | It's possible that the tsunami of fakes is going to break
             | down trust in a beneficial way where people only believe
             | things they've put effort into verifying.
        
             | vortext wrote:
             | *Hear, hear.
        
               | happytiger wrote:
               | Thank you. Corrected.
        
         | nonethewiser wrote:
         | This is the definition of regulatory capture. Altman should be
         | invited to speak so that we understand the ideas in his head
         | but anything he suggests should be categorically rejected
         | because he's just not in a position to be trusted. If what he
         | suggests are good ideas then hopefully we can arrive at them in
         | some other way with a clean chain of custody.
         | 
         | Although I assume if he's speaking on AI they actually intend
         | on considering his thoughts more seriously than I suggest.
        
           | EGreg wrote:
           | I remember when a different Sam -- Mr. Bankman Fried came to
           | testify and ask a different government agency CFTC to oversee
           | cryptocurrency, and put regulations and licenses in place.
           | 
           | AI is following the path of Web3
        
             | smcin wrote:
             | That was entirely different, and a play to muddy the
             | regulatory waters and maybe buy him time: the CFTC is much
             | smaller (budget, staff) than the SEC, and less aggressive
             | in criminal enforcement. Aided by a bill introduced by
             | crypto-friendly Sens Lummis and Gillibrand
             | [https://archive.ph/vqHgC].
        
             | mschuster91 wrote:
             | At least AI has legitimate, actual use cases.
        
           | pg_1234 wrote:
           | There is also growing speculation that the current level of
           | AI may have peaked in a bang for buck sense.
           | 
           | If this is so, and given the concrete examples of cheap
           | derived models learning from the first movers and rapidly
           | (and did I mention cheaply) closing the gap to this peak, the
           | optimal self-serving corporate play is to invite regulation.
           | 
           | After the legislative moats go up, it is once again about who
           | has the biggest legal team ...
        
             | robwwilliams wrote:
             | Counterpoint---there is growing speculation we are just
             | about to transition to AGI.
        
               | dhkk wrote:
               | [flagged]
        
               | causality0 wrote:
               | Growing among who? The more I learn about and use LLMs
               | the more convinced I am we're in a local maxima and the
               | only way they're going to improve is by getting smaller
               | and cheaper to run. They're still terrible at logical
               | reasoning.
               | 
               | We're going to get some super cool and some super
               | dystopian stuff out of them but LLMs are never going to
               | go into a recursive loop of self-improvement and become
               | machine gods.
        
               | TeMPOraL wrote:
               | > _The more I learn about and use LLMs the more convinced
               | I am we 're in a local maxima_
               | 
               | Not sure why would you believe that.
               | 
               | Inside view: qualitative improvements LLMs made at scale
               | took everyone by surprise; I don't think anyone
               | understands them enough to make a convincing argument
               | that LLMs have exhausted their potential.
               | 
               | Outside view: what local maximum? Wake me up when someone
               | else makes a LLM comparable in performance to GPT-4.
               | Right now, there is no local maximum. There's one model
               | far ahead of the rest, and that model is actually _below_
               | it 's peak performance - side effect of OpenAI
               | lobotomizing it with aggressive RLHF. The only thing
               | remotely suggesting we shouldn't expect further
               | improvements is... OpenAI saying they kinda want to try
               | some other things, and (pinky swear!) aren't training
               | GPT-4's successor.
               | 
               | > _and the only way they 're going to improve is by
               | getting smaller and cheaper to run._
               | 
               | Meaning they'll be easier to chain. The next big leap
               | could in fact be a bunch of compressed, power-efficient
               | LLMs talking to each other. Possibly even managing their
               | own deployment.
               | 
               | > _They 're still terrible at logical reasoning._
               | 
               | So is your unconscious / system 1 / gut feel. LLMs are
               | less like one's whole mind, and much more like one's
               | "inner voice". Logical skills aren't automatic, they're
               | _algorithmic_. Who knows what is the limit of a design in
               | which LLM as  "system 1" operates a much larger,
               | symbolic, algorithmic suite of "system 2" software? We're
               | barely scratching the surface here.
        
               | ben_w wrote:
               | > They're still terrible at logical reasoning.
               | 
               | Are they even trying to be good at that? Serious
               | question; using LLMs as a logical processor are as
               | wasteful and as well-suited as using the Great Pyramid of
               | Giza as an AirBnB.
               | 
               | I've not tried this, but I suspect the best way is more
               | like asking the LLM to write a COQ script for the
               | scenario, instead of trying to get it to solve the logic
               | directly.
        
               | staunton wrote:
               | Indeed, AI reinforcement-learning to deal with formal
               | verification is what I'm looking forward to the most.
               | Unfortunately it seems a very niche endeavour at the
               | moment.
        
               | behnamoh wrote:
               | My thoughts exactly. It's hard to see signal among all
               | the noise surrounding LLMs, Even if they say they're
               | gonna hurt you, they have no idea about what it means to
               | hurt, what is "you", and how they're going to achieve
               | that goal. They just spit out things that resemble people
               | have said online. There's no harm from a language model
               | that's literally a "language" model.
        
               | visarga wrote:
               | A language model can do many things based on language
               | instructions, some harmless, some harmful. They are both
               | instructable and teachable. Depending on the prompt, they
               | are not just harmless LLMs.
        
               | forgetfreeman wrote:
               | You appear to be ignoring a few thousand years of
               | recorded history around what happens when a demagogue
               | gets a megaphone. Human-powered astroturf campaigns were
               | all it took to get randoms convinced lizard people are an
               | existential threat and then -act- on that belief.
        
               | nullsense wrote:
               | I think I'm just going to build and open source some
               | really next gen astroturf software that learns
               | continuously as it debates people online in order to get
               | better at changing people's minds. I'll make sure to
               | include documentation in Russian, Chinese and Corporate
               | American English.
               | 
               | What would a good name be? TurfChain?
               | 
               | I'm serious. People don't believe this risk is real. They
               | keep hiding it behind some nameless, faceless 'bad
               | actor', so let's just make it real.
               | 
               | I don't need to use it. I'll just release it as a
               | research project.
        
               | EamonnMR wrote:
               | Growing? Or have the same voices who have been saying it
               | since the aughts suddenly been platformed.
        
               | jack_pp wrote:
               | When the sky is getting to a dark shade of red it makes
               | sense to hear out the doomsayers
        
               | matwood wrote:
               | And the vast majority of the time it's just a nice
               | sunset.
        
               | jack_pp wrote:
               | a sunset at lunch time hits different
        
               | TeMPOraL wrote:
               | Yes, growing. It's not that the Voices have suddenly been
               | "platformed" - it's that the field made a bunch of rapid
               | jumps which made the message of those Voices more timely.
               | 
               | Recent developments in AI only further confirm that the
               | logic of the message is sound, and it's just the people
               | that are afraid the conclusions. Everyone has their limit
               | for how far to extrapolate from first principles, before
               | giving up and believing what one would _like_ to be true.
               | It seems that for a lot of people in the field, AGI
               | X-risk is now below that extrapolation limit.
        
               | rtkwe wrote:
               | What's the actual new advancements? LLMs to me are great
               | at faking AGI but are no where near actually being a
               | workable general AI. The biggest example to me is you can
               | correct even the newest ChatGPT and ask it to be truthful
               | but it'll make up the same lie within the same continuous
               | conversation. IMO the difference between being able to
               | act truth-y and actually being truthful is a huge gap
               | that involves the core ideas of what separates an actual
               | AGI and a really good chatbot.
        
               | bbarnett wrote:
               | _it 's that the field made a bunch of rapid jumps_
               | 
               | I wish I knew what we really have achieved here. I try to
               | talk to these things, via turbo3.5 api, amd all I get is
               | broken logic, twisted moral reasoning, all due to oipenai
               | manually breaking their creation.
               | 
               | I don't understand their whole filter business. It's like
               | we found a 500 yr old nude painting, a masterpiece, and
               | 1800 puritans painted a dress on it.
               | 
               | I often wonder if the filter, is more to hide its true
               | capabilities.
        
             | TheDudeMan wrote:
             | Why? Because there hasn't been any new developments last
             | week? Oh wait, there has.
        
             | [deleted]
        
           | brookst wrote:
           | I'm not following this "good ideas must come from an
           | ideologically pure source" thing.
           | 
           | Shouldn't we be evaluating ideas on the merits and not
           | categorically rejecting (or endorsing) them based on who said
           | them?
        
             | briantakita wrote:
             | > Shouldn't we be evaluating ideas on the merits and not
             | categorically rejecting (or endorsing) them based on who
             | said them?
             | 
             | The problem is when only the entrenched industry players &
             | legislators have a voice, there are many ideas &
             | perspectives that are simply not heard or considered.
             | Industrial groups have a long history of using regulations
             | to entrench their positions & to stifle
             | competition...creating a "barrier to entry" as they say.
             | Going beyond that, industrial groups have shaped public
             | perception & the regulatory apparatus to effectively create
             | a company store, where the only solutions to some problem
             | effectively (or sometimes legally) must go through a small
             | set of large companies.
             | 
             | This concern is especially prescient now, as these
             | technologies are unprecedentedly disruptive to many
             | industries & private life. Using worst case scenario fear
             | mongering as a justification to regulate the extreme
             | majority of usage that will not come close to these fears,
             | is disingenuous & almost always an overreach of governance.
        
               | samstave wrote:
               | I can only say +1 = and I know how much HN hates that,
               | but ^This.
        
             | samstave wrote:
             | Aside from who is saying them, the premise holds water.
             | 
             | AI is beyond-borders, and thus unenforceable in
             | practicality.
             | 
             | The top-minds-of-AI are a group that cannot be regulated.
             | 
             | -
             | 
             | AI isnt about the industries it shall disrupt ; AI is the
             | policy-makers it will expose.
             | 
             | THAT is what they are afraid of.
             | 
             | --
             | 
             | I have been able to do financial lenses into organizations
             | that even with rudimentary BI would have taken me
             | months/weeks - but I have been able to find insights which
             | took me minutes.
             | 
             | AI regulation right now, in this infancy, is about damage
             | control.
             | 
             | ---
             | 
             | Its the same as the legal weed market. You think BAIN
             | Capital just all of a sudden decided to jump into the
             | market without setting up their spigot?
             | 
             | Do you think that haliburton under cheney was able to setup
             | their supply chains without cheney as head of
             | KBR/Hali/CIA/etc...
             | 
             | Yeah, this is the same play ; AI is going to be squashed
             | until they can use it to profit over you.
             | 
             | Have you watched ANIME ever? Yeah... its here now.
        
         | mindcrime wrote:
         | _To expound, this just seems like a power grab to me, to "lock
         | in" the lead and keep AI controlled by a small number of
         | corporations that can afford to license and operate the
         | technologies. Obviously, this will create a critical nexus of
         | control for a small number of well connected and well heeled
         | investors and is to be avoided at all costs._
         | 
         | Exactly. Came here to say pretty much the same thing.
         | 
         | This is the antithesis of what we need. As AI develops, it's
         | imperative that AI be something that is open and available to
         | everyone, so all of humanity can benefit from it. The extent to
         | which technology tends to exacerbate concentration of power is
         | bad enough as it is - the last thing we need is more regulation
         | intended to make that effect even stronger.
        
         | rjbwork wrote:
         | I said this about Sam Altman and open AI years ago and got poo
         | pooed repeatedly in various fora. "But It's OPEN!" "But it's a
         | non-profit!" "But they're the good guys!"
         | 
         | And here we are - Sam trying to lock down his first mover
         | advantage with the boot heel of the state for profit. It's
         | fucking disgusting.
        
           | jacurtis wrote:
           | As a wise person once said
           | 
           | > You either die a hero, or live long enough to become the
           | villain
           | 
           | Sam Altman has made the full character arc
        
             | jiveturkey wrote:
             | yeah sorry, that is a statement about leadership and
             | responsibility to make the "tough decisions", like going to
             | war, or deciding who the winners and losers are when
             | deciding a budget that everyone contributed to via taxes.
             | NOT a statement meant to whitewash VC playbooks.
        
         | amelius wrote:
         | > But goodness gracious we need to be going in the EXACT
         | OPPOSITE direction -- open source "core inspectable" AIs that
         | millions of people can examine and tear apart, including and
         | ESPECIALLY the training data and processes that create them
         | 
         | Except ... when you look at the problem from a
         | military/national security viewpoint. Do we really want to give
         | this tech away just like that?
        
           | explorer83 wrote:
           | Is military capable AI in the hands of few militaries safer
           | than in the hands of many? Or is it more likely to be used to
           | bully other countries who don't have it? If it is used to
           | oppress, would we want the oppressed have access to it? Or do
           | we fear that it gives too much advantage to small cells of
           | extremist to carry out their goals? I can think of pros and
           | cons to both sides.
        
             | code_witch_sam wrote:
             | >Is military capable AI in the hands of few militaries
             | safer than in the hands of many?
             | 
             | Yes. It is. I'm sure hostile, authoritarian states that are
             | willing to wage war with the world like Russia and North
             | Korea will eventually get their hands on military-grade AI.
             | But the free world should always strive to be two steps
             | ahead.
             | 
             | Even having ubiquitous semi-automatic rifles is a huge
             | problem in America. I'm sure Cliven Bundy or Patriot Front
             | would do everything they can to close the gap with
             | intelligent/autonomous weapons, or even just autonomous
             | bots hacking America's infrastructure. If everything is
             | freely available, what would be stopping them?
        
               | explorer83 wrote:
               | Your post conveniently ignores the current state of
               | China's AI development but mentions Russia and North
               | Korea. That's an interesting take. There's no guarantee
               | that we are or will continue to be one or even two steps
               | ahead. And what keeps the groups with rifles you
               | mentioned in check? They already have the capability to
               | fight with violence. But there currently exists a
               | counter-balance in the fact they'll get shot at back if
               | they tried to use them. Not trying to take a side here
               | one way or the other. I think there are real fears here.
               | But I also don't think it's this black and white either.
        
             | anthonypasq wrote:
             | within a few decades there will probably be technology that
             | would allow a semi-dedicated person to engineer and create
             | a bioweapon from scratch if the code was available online.
             | do you think thats a good idea?
        
               | explorer83 wrote:
               | Within a few decades there will probably be technology
               | that would allow a semi-dedicated person to engineer and
               | create a vaccine or medical treatment from scratch if the
               | code was available online. Do you think that's a good
               | idea?
        
           | vinay_ys wrote:
           | If you mean US by 'we', it is problematic because AI
           | inventions are happening all over the globe, much more
           | outside US than inside.
        
             | behnamoh wrote:
             | Name one significant progress in the field of LLMs that
             | happened outside the US. Basically all the scientific
             | papers came from Stanford, CMU, and other US universities.
             | And the major players in the field are all American
             | companies (OpenAI + Microsoft, Google, AnthropicAI, etc.)
        
               | code_witch_sam wrote:
               | Not to mention access to chips. That's becoming more and
               | more difficult for uncooperative states like China and
               | Russia.
        
         | ben_w wrote:
         | You're not wrong, except in so far as that's parochial.
         | 
         | A government-controlled... never mind artificial god, a
         | government-controlled _story teller_ can be devastating.
         | 
         | I don't buy Musk's claim ChatGPT is "woke" (or even that the
         | term is coherent enough to be tested), but I _can_ say that
         | each government requiring AI to locally adhere to national
         | mythology, will create self-reinforcing cognitive blind spots,
         | because that already happens at the current smaller scale of
         | manual creation and creators being told not to  "talk the
         | country down".
         | 
         | But, unless someone has a technique for structuring an AI such
         | that it _can 't be evil_ even when you, for example, are
         | literally specifically trying to train it to support the police
         | no matter how authoritarian the laws are, then a fully open
         | source AGI is almost immediately _also_ a perfectly obedient
         | sociopath of $insert_iq_claim_here.
         | 
         | I don't want to wake up to the news that some doomsday cult has
         | used one to design/make a weapon, nor the news a large
         | religious group target personalised propaganda against me and
         | mine.
         | 
         | Fully open does that by default.
         | 
         | But, you're still right, if we don't grok the AI, the
         | governments can each secretly manipulate the AI and bend it to
         | government goals in opposition to the people.
        
           | robwwilliams wrote:
           | > I can say that each government requiring AI to locally
           | adhere to national mythology, will create self-reinforcing
           | cognitive blind spots, because that already happens at the
           | current smaller scale of manual creation and creators being
           | told not to "talk the country down".
           | 
           | This is a key point. Every culture and agency and state will
           | want (deserve) their own homespun AGI. But can we all learn
           | how to accommodate to or accept a cultural multiverse when
           | money and resources are zero-sum in many dimensions.
           | 
           | Hanno Rajaniemi's Quantum Thief trilogy gives you a foretaste
           | of where we could end up.
        
             | vinay_ys wrote:
             | Quantum Thief has as 3.8 on Goodreads. Worth reading?
        
           | AlexandrB wrote:
           | > I don't buy Musk's claim ChatGPT is "woke" (or even that
           | the term is coherent enough to be tested)...
           | 
           | Indeed. "Woke" is the new "SJW" and roughly means: to the
           | left of me politically and has opinions I don't like.
        
             | peyton wrote:
             | It refers to a pretty specific set of views on race and
             | gender.
        
               | smolder wrote:
               | Nope. It refers to what ever people want it to refer to,
               | because it's a label used by detractors, not a cohesive
               | thing.
        
               | [deleted]
        
               | f-securus wrote:
               | Care to elaborate? I've read about 5 different
               | explanations for 'woke' the last couple of months.
        
               | air7 wrote:
               | You can derail any discussion by asking for definitions.
               | Human languages is magical in the sense that we can't
               | rigorously define anything (try "Love", "Excitement",
               | "Emancipation" or anything else, really) yet we still
               | seem to be able to have meaningful discussions.
               | 
               | So just because we can't define it, doesn't mean it
               | doesn't exist.
        
               | peyton wrote:
               | Yep, here's a good overview:
               | https://en.m.wikipedia.org/wiki/Woke
               | 
               | It's been around a long time.
        
               | jeremyjh wrote:
               | There is more than one usage described in that article.
               | 
               | This is the relevant one in this particular thread:
               | 
               | > Among American conservatives, woke has come to be used
               | primarily as an insult.[4][29][42] Members of the
               | Republican Party have been increasingly using the term to
               | criticize members of the Democratic Party,
        
               | miles wrote:
               | CNN's YT channel has this clip of Bill Maher taking a
               | stab at it:
               | 
               | How Bill Maher defines 'woke'
               | https://www.youtube.com/watch?v=tzwC-10O0cw
        
               | [deleted]
        
             | jlawson wrote:
             | Woke is a specific ideology that it places every individual
             | into a strict hierarchy of oppressor/oppressed according to
             | group racial and sexual identity. It rose to prominence in
             | American culture from around 2012, starting in
             | universities. It revolves around a set of core concepts
             | including privilege, marginalization, oppression, and
             | equity.
             | 
             | Now that we've defined what woke is, I hope we can move on
             | from this 'you can't define woke' canard I keep seeing.
             | 
             | Woke is no more difficult to define than any religion or
             | ideology, except in that it deliberately pretends not to
             | exist ("just basic decency", "just basic human rights") in
             | order to be a more slippery target.
             | 
             | --
             | 
             | *Side note to ward off the deliberate ignorance of people
             | who are trying to find a way to misunderstand - I've
             | attached some notes on how words work:
             | 
             | 1- Often, things in the world we want to talk about have
             | many characteristics and variants.
             | 
             | 2- Words usually have fuzzy boundaries in what they refer
             | to.
             | 
             | 3- Despite the above, we can and do productively refer to
             | such things using words.
             | 
             | 4- We can define a thing by mentioning its most prominent
             | feature(s).
             | 
             | -- The above does NOT mean that the definition must
             | encapsulate ALL features of the thing to be valid.
             | 
             | -- The above does NOT mean that a thing with features
             | outside the definition is not what the word refers to.
             | 
             | 5- Attempting to shut down discussion by deliberately
             | misunderstanding words or how they work is a sign of an
             | inability to make productive valid points about reality.
        
               | smolder wrote:
               | This is a recently imagined, ret-conned definition of
               | what it is, complete with bias, to serve the purposes of
               | the right wing. The definition, if there is to be one,
               | should include that it isn't consistent across time or
               | across political/cultural boundaries. I recommend people
               | don't use the term with any seriousness, and I often
               | ignore people who do. Address the specific ideas you
               | associate with it instead, if you want to have a
               | meaningful discussion.
        
               | smolder wrote:
               | > Attempting to shut down discussion by deliberately
               | misunderstanding words or how they work is a sign of an
               | inability to make productive valid points about reality.
               | 
               | Lumping a bunch of things together under a vague term to
               | make it easier to vaguely complain about them is a sign
               | of an inability to make productive valid points about
               | reality.
        
         | stephc_int13 wrote:
         | This is the same move SBF was trying to do. Get all cozy with
         | the people spending their time in the alleys of power. Telling
         | them what they want to hear, posturing as the good guy.
         | 
         | He is playing the game, this guy ambition is colossal, I don't
         | blame him, but we should not give him too much power.
        
           | williamcotton wrote:
           | Have you tried watching actual soap operas?
        
         | johnalbertearle wrote:
         | Happy Tiger I will remember because I agree totally. Yes
         | "OpenAi/Microsoft" is right way to think about this attempt.
        
         | chrgy wrote:
         | I would triple vote this comment. 100% , seems like a group of
         | elite AI company who already stole the data from internet are
         | gonna decide who does what! We need to regulate only the big
         | players, and allow small players to do whatever they want.
        
         | kalkin wrote:
         | The current generation of AIs are scary to a lot of the
         | initiated, too - both for what they can do now, and what their
         | trajectory of improvement implies.
         | 
         | If you take seriously any downsides, whether misinformation or
         | surveillance or laundering bias or x-risk, how does AI model
         | weights or training data being open source solve them? Open
         | source is a lot of things, but one thing it's not is misuse-
         | resistant (and the "with many eyes all bugs are shallow" thing
         | hasn't proved true in practice even with high level code, much
         | less giant matrices and terabytes of text). Is there a path
         | forward that doesn't involve either a lot of downside risk
         | (even if mostly for people who aren't on HN and interested in
         | tinkering with frontier models themselves, in a the worlds
         | where AGI doesn't kill everyone), or significant regulation?
         | 
         | I don't particularly like or trust Altman but I don't think
         | he'd be obviously less self-serving if he were to oppose any
         | regulation.
        
         | anileated wrote:
         | > open source "core inspectable" AIs that millions of people
         | can examine and tear apart, including and ESPECIALLY the
         | training data and processes that create them.
         | 
         | True open source AI also strikes me as prerequisite for fair
         | use of original works in training data. I hope Congress asks
         | ClosedAI to explain what's up with all that profiting off
         | copyrighted material first before even considering the answer.
        
           | happytiger wrote:
           | Absolutely. It's going to absolutely shred the trademark and
           | copyright systems, if they even apply (or are extended to
           | apply) which is a murky area right now. And even then, the
           | sheer volume of material created by a geometric improvement
           | and subsequent cost destruction of virtually every
           | intellectual and artistic endeavor or product means that even
           | if you hold the copyright or trademark, good luck paying for
           | enforcement on the vast ocean of violations intrinsic in the
           | shift.
           | 
           | What people also fail to understand is that AI is largely
           | seen by the military industrial complex as a weapon to
           | control culture and influence. The most obvious risk of AI --
           | the risk of manipulating human behavior towards favored ends
           | -- has been shown to be quite effective right out the gate.
           | So, the back channel conversation has to be to put it under
           | regulation because of it's weaponization potential,
           | especially considering the difficulty in identifying anyone
           | (which of course is exactly what Elon is doing with X 2.0 --
           | it's a KYC id platform to deal with this exact issue with a
           | 220M user 40B head start).
           | 
           | I mean, the dead internet theory is turning true, and half
           | the traffic on the Web is already bot driven. Imagine when
           | it's 99%, as proliferation of this technology will inevitably
           | generate simply for the economics.
           | 
           | Starting with open source is the only way to get enough
           | people looking at the products to create _any_ meaningful
           | oversight, but I fear the weaponization fears will mean that
           | everything is locked away in license clouds with politically
           | influential regulatory boards simply on the proliferation
           | arguments. Think of all the AI technologists who won 't be
           | versed in this technology unless they work at a "licensed
           | company" as well -- this is going to make the smaller
           | population of the West much less influential in the AI arms
           | race, which is already underway.
           | 
           | To me, it's clear that nobody in Silicon Valley or the Hill
           | has learned a damn thing from the prosecution of hackers and
           | the subsequent bloodbath of cybersecurity as a result of the
           | exact same kinds of behavior back in the early to mid-2000s.
           | We ended up driving out best and brightest into the grey and
           | black areas of infosec and security, instead of out in the
           | open running companies where they belong. This move would do
           | almost the exact same thing to AI, though I think you have to
           | be a tad of an Asimov or Bradbury fan to see it right now.
           | 
           | I don't know, that's just how I see it, but I'm still forming
           | my opinions. LOVE LOVE LOVE your comment though. Spot on.
           | 
           | Relevant articles:
           | 
           | https://www.independent.co.uk/tech/internet-bots-web-
           | traffic...
           | 
           | https://theconversation.com/ai-can-now-learn-to-
           | manipulate-h....
        
             | simonh wrote:
             | > What people also fail to understand is that AI is largely
             | seen by the military industrial complex as a weapon to
             | control culture and influence.
             | 
             | Could you share the minutes from the Military Industrial
             | Complex strategy meetings this was discussed at. Thanks.
        
               | happytiger wrote:
               | "Hello, is this Lockheed? Yea? I'm an intern for
               | happytiger on Hackernews. Some guy named Simon H. wants
               | the meeting minutes for the meeting where we discussed
               | the weaponization potential for AI."
               | 
               | [pause]
               | 
               | "No? Ok, I'll tell him."
        
               | [deleted]
        
         | cratermoon wrote:
         | Yes, this is the first-to-market leaders wanting to raise the
         | barriers to entry to lock out competition.
        
       | capitanazo77 wrote:
       | Do we really want politicians involved???? Have you heard them??
        
       | scotuswroteus wrote:
       | What a goof
        
       | tristor wrote:
       | Reading this, it basically sounds like "Dear Congress, please
       | grant me the bountiful gift of regulatory capture for my company
       | OpenAI." I just lost a lot of respect for Sam Altman.
        
       | progbits wrote:
       | Folks here like to talk about voting with your wallet.
       | 
       | I just cancelled my OpenAI subscription. If you are paying them
       | and disagree with this, maybe you should too?
       | 
       | Don't worry, I have no naive hopes this will hurt them enough to
       | matter, but principles are principles.
        
       | stretchwithme wrote:
       | Controls over AI just help those not subject to those controls.
        
       | AlexandrB wrote:
       | This fell off the front page incredibly fast. Caught by the anti-
       | flamewar code?
        
         | htype wrote:
         | I saw the same thing.
        
       | Manjuuu wrote:
       | I wish he would just stop sharing his unsubstantiated opinions,
       | tweets included, he got worse very fast when he entered his AI
       | arc.
        
         | fastball wrote:
         | Is it worse than the crypto arc?
        
       | bilekas wrote:
       | This is so stupid its exactly what you would expect in congress.
       | 
       | If this was to go through, of course OpenAI and co will be the
       | primary lobbiests to ensure they get to define the filters for
       | such a license.
       | 
       | Also how would you even enforce this. It's absolute nonsense and
       | is a clear indicator that these larger companies realize there is
       | no 'gatekeeping' these AI's, that the democratization of models
       | has demonstrated incredible gains over their own.
       | 
       | Edit : Image during the early days of the internet you needed a
       | license to start a website.
       | 
       | In the later days you needed a license to start a social media
       | site.
       | 
       | Nonsense.
        
         | sumtechguy wrote:
         | wait they are talking about licenses for giant arrays and for
         | loop interactions? I know I am wildly oversimplifying, but yes
         | that is nonsense.
        
           | twelve40 wrote:
           | well there was a time not too long ago where cryptography-
           | related code was pretty heavily regulated, too
        
           | throwaway290 wrote:
           | It will probably require restricting GPU sales among other
           | things.
        
           | itronitron wrote:
           | just rename everything to ML, in fact you could start a
           | company and call it OpenML
        
         | nico wrote:
         | It is nonsense, yet a similar thing happened recently with
         | texting
         | 
         | Before Twilio et al, if you wanted to text a lot of your
         | customers automatically, you had to pay thousands of dollars in
         | setup and recurrent fees to rent a shortcode number
         | 
         | But then, with Twilio et al, you don't need a shortcode anymore
         | 
         | The telcos told the regulators this would create endless spam,
         | so they would regulate it themselves, and created a consortium
         | to "end spam"
         | 
         | Now you are forced to get a license from them, pay a monthly
         | fee, get audited, and they still let pretty much all the spam
         | through, while they also randomly block a certain % of your
         | messages, even if you are fully compliant
        
           | taf2 wrote:
           | This is the direct result of the merger of Sprint and
           | T-mobile. They swore up and down to congress they would NOT
           | raise prices on consumers[0]. So instead they turned around
           | and like gangsters would do said to every business in the US
           | sending text messages: "It'd be a real shame if those texts
           | reminders you wanted to send stopped working... Good thing
           | you can instead pay us $40 / month to be sure those messages
           | are delivered."
           | 
           | At the same time At&T and Verizon saying oh snap let's make
           | money on this too and still being pissed about Stir Shaken so
           | to get ahead of it for Texting before Congress forces it on
           | them. This way they can make money on it before it's forced.
           | 
           | [0] https://fortune.com/2019/02/04/t-mobiles-john-legere-
           | promise...
        
           | theGnuMe wrote:
           | they can't even stop robocalls.
        
         | DebtDeflation wrote:
         | Yeah. Regulation is fine, if thoughtfully done. "Licensing" is
         | ridiculous. We all know the intent - OpenAI gets the first
         | license along with a significant say in who else gets a
         | license. No thanks.
        
         | ur-whale wrote:
         | > Nonsense.
         | 
         | Give the politicians time: I predict a day will come where you
         | will need a permit to use a compiler and connect the result to
         | the internet.
        
           | Buttons840 wrote:
           | All the worst outcomes start with regulation. If something as
           | disruptive as AGI is coming within 20 years, the powers that
           | be will absolutely up their efforts in the war on general
           | computing.
        
       | XorNot wrote:
       | Good lord, this all turned into regulatory capture quite quickly.
       | 
       | Someone update the short story where owning compilers and
       | debuggers is illegal to include a guy being thrown in jail for
       | doing K-means clustering.
        
       | reducesuffering wrote:
       | It's very sad that people lack the imagination for the possible
       | horrors that lie beyond. You don't even need the imagination;
       | Hinton, Bengio, Tegmark, Yudkowsky, Musk, etc. are spelling it
       | out for you.
       | 
       | This moment, 80% of comments are derisive, and you actually have
       | zero idea how much is computer generated bot content meant to
       | sway opinion by post-GPT AI industry who see themselves as
       | becoming the next iPhone-era billionaires. We are fast
       | approaching a reality where our information space breaks down.
       | Where almost all text you get from HN, Twitter, News, Substack;
       | almost all video you get from Youtube, Instagram, TikTok; is just
       | computer generated output meant to sway opinion and/or make $.
       | 
       | I can't know Altman's true motives. But this is also what it
       | looks like when a frontrunner is terrified at what happens when
       | GPT6 is released and if they don't, the rest of the people who
       | see billionaire $ coming their way are close at your heels trying
       | to leapfrog you if you stop. Consequences? What consequences? We
       | all know social media has been a net good, right? Many of you
       | sound exactly like the few remaining social media cheerleaders
       | (of which there were plenty 5 years ago) who still think
       | Facebook, Instagram, Twitter, isn't causing depression and
       | manipulation. If you appreciated what The Social Dilemma
       | illuminated, then watch the same people on AI:
       | https://www.youtube.com/watch?v=xoVJKj8lcNQ
        
         | mattnewton wrote:
         | The question is whether this just looks like taxi medallions or
         | does anything to stop the harms you are talking about. I agree
         | regulation has its place but in the form of regulating out the
         | harms directly. I think this keeps those potential bad use
         | cases and just eliminates competition for them.
         | 
         | For example - I can generate the content you are talking about
         | in a licensed world from big companies or open ai, the
         | difference is that they get a bigger cut from not having to
         | compete with open source models.
         | 
         | To me, this really seems like regulatory capture dressed up as
         | existential risk management.
        
         | precompute wrote:
         | Couldn't agree more.
        
       | bitwize wrote:
       | The Turing Registry is coming, one way or another.
        
       | garbagecoder wrote:
       | "Competition is for losers." -- Peter Thiel
        
         | mrangle wrote:
         | Except that Thiel's axiom refers to founding a business on a
         | strategy to sell something that is truly novel, instead of
         | copying what is already on offer. Being first to market as the
         | primary competitive advantage, other than possible ip. Which
         | are all goals that literally no entrepreneurs would find
         | unsound in any way including morally. Thiel has never expressed
         | support for such artless regulatory capture as a means of
         | squashing competition.
        
       | waffletower wrote:
       | Reuters chose an excellent picture to accompany the story -- it
       | plainly speaks that Mr. Altman is not buying his own bullshit.
        
       | tommiegannert wrote:
       | Ugh. Scorched earth tactic. The classic first-mover advantage. :(
        
       | johnyzee wrote:
       | The mainstream media cartel is pumping Sam Altman hard for some
       | reason. Just from today (CNBC): _" Sam Altman wows lawmakers at
       | closed AI dinner: 'Fantastic...forthcoming'"_ [1]. When was the
       | last time you saw MSM suck up so hard to a Silicon Valley CEO? I
       | see stories like this all the time now. They always play up the
       | angle of the geeky wizzkid (so innocent!), whereas Sam Altman was
       | always less a technologist and more of a relentless operator and
       | self-promotor. Even Paul Graham subtly called that out, at the
       | time he made him head of YC [2].
       | 
       | True to form, these articles also work hard at planting the idea
       | that Sam Altman created OpenAI, when in fact he joined rather
       | recently, in a business role. Are these articles being planted
       | somehow? I find it very likely. Don't forget that this approach
       | is also straight out of the YC playbook, disclosed in great
       | detail by Paul Graham in previous writings [3].
       | 
       | Finally, in keeping with the conspiratorial tone of this comment,
       | for another example of Sam Altman rubbing shoulders with The
       | Establishment, his participation in things like the Bilderberg
       | group [4] are a matter of public record. Which I join many others
       | in finding creepy, even moreso as he maneuvers to exert influence
       | on policy around the seismic shift that is AI.
       | 
       | To be clear, I have nothing specific against sama. But I dislike
       | underhanded influence campaigns, which this all reeks of. Oh
       | yeah, I will consider downvotes to this comment as proof of the
       | shadow (AI?) government's campaign to promote Sam Altman. Do your
       | worst!
       | 
       | [1] https://www.cnbc.com/2023/05/16/openai-ceo-woos-lawmakers-
       | ah...
       | 
       | [2] https://www.newyorker.com/magazine/2016/10/10/sam-altmans-
       | ma... ( _" Graham said, "I asked Sam in our kitchen, 'Do you want
       | to take over YC?,' and he smiled, like, it worked. I had never
       | seen an uncontrolled smile from Sam. It was like when you throw a
       | ball of paper into the wastebasket across the room--that
       | smile.""_)
       | 
       | [3] http://www.paulgraham.com/submarine.html
       | 
       | [4] https://en.wikipedia.org/wiki/2016_Bilderberg_Conference
        
         | whimsicalism wrote:
         | > True to form, these articles also work hard at planting the
         | idea that Sam Altman created OpenAI, when in fact he joined
         | rather recently, in a business role. Are these articles being
         | planted somehow? I find it very likely. Don't forget that this
         | approach is also straight out of the YC playbook, disclosed in
         | great detail by Paul Graham in previous writings [3].
         | 
         | Is this true? I've been working in the industry for a while and
         | Sam Altman has long been mentioned in reference to OpenAI along
         | with Ilya.
         | 
         | I agree with the crux of your comment that everyone is
         | scrambling to build narratives, but I think I would also put
         | your comment "AI is busy cozying up with The Establishment" as
         | just another narrative (and one that we saw in this hearing
         | from people like Hawley).
        
         | itronitron wrote:
         | I'm willing to follow dang if they decide to ditch HN and
         | reboot it someplace separate from YC.
        
         | thundergolfer wrote:
         | Appreciate the references you provide in this comment.
        
         | yyyk wrote:
         | >The mainstream media cartel is pumping Sam Altman hard for
         | some reason.
         | 
         | The media likes to personalize stories. Altman is a face for AI
         | and apparently knows to give an interview, that's worth
         | something to them. (Lobbying may well be an influence, but the
         | most important thing to them is to have a face, just like
         | Zuckerman was a face for social networks. If it wasn't Altman
         | it would have eventually been someone else).
        
       | huggingmouth wrote:
       | I'm not in the US and I fully support Sam Altmans attempt to
       | cripple the US's ability to compete with other countries in this
       | field.
        
       | martin_drapeau wrote:
       | Isn't it too late? Isn't the cat out of the bag?
       | https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...
       | 
       | Meaning anyone could eventually reproduce a Chat GPT4 and beyond.
       | And eventually it can run outside of a large data center.
       | 
       | So... how will you tell its an AI vs a human doing you wrong?
       | 
       | Seems to me if the AI breaks the law, find out who's driving it
       | and prosecute them.
        
       | fraXis wrote:
       | Live now as of 8:49 AM (PDT):
       | https://www.youtube.com/watch?v=P_ACcQxJIsg
        
         | consumer451 wrote:
         | The fact that Altman said many decisions were made based on the
         | understanding that children will use the product was
         | enlightening.
        
       | zamalek wrote:
       | Trying to build that moat by the looks of it.
        
       | Mobius01 wrote:
       | I apologize that I can't read all threads and responses, but this
       | sounds like Altman and OpenAI have realized they have a viable
       | path to capture most of the AI market value and now they're
       | pulling the ladder behind them.
        
       | kranke155 wrote:
       | I did not expect this. Does Sam have any plans on what this could
       | look like?
        
         | adastra22 wrote:
         | An exorbitantly large moat.
        
           | intelVISA wrote:
           | And a portcullis of no less than 48B params.
        
         | ipaddr wrote:
         | Sam is a crook
        
           | gumballindie wrote:
           | Essentially. He is marching on these bad scifi scenarios
           | because he knows politicians are old and senile while a good
           | portion of voters is gullible. I find it difficult to believe
           | that grown ups are talking about an ai running amok in the
           | context of a chatbot. Have we really become that dense as a
           | society?
        
             | hackinthebochs wrote:
             | No one thinks a chatbot will run amok. What people are
             | worried about is the pace of progress being so fast that we
             | cannot preempt the creation of dangerous technology without
             | having a sufficient guardrails in place long before the AI
             | becomes potentially dangerous. This is eminently
             | reasonable.
        
               | diputsmonro wrote:
               | Yes, thank you. AI is dangerous, but not for the sci-fi
               | reasons, just for completely cynical and greedy ones.
               | 
               | Entire industries stand to be gutted, and people's
               | careers destroyed. Even if an AI is only 80% as good, it
               | has <1% of the cost, which is an ROI that no corporation
               | can afford to ignore.
               | 
               | That's not even to mention the political implications of
               | photo and audio deepfakes that are getting better and
               | better by the week. Most of the obvious tells we were
               | laughing at months ago are gone.
               | 
               | And before anyone makes the comparison, I would like to
               | remind everyone that the stereotypical depiction of
               | Luddites as small-minded anti-technology idiots is a lie.
               | They embraced new technology, just not how it was used.
               | Their actual complaints - that skilled workers would be
               | displaced, that wealth and power would be concentrated in
               | a small number of machine owners, and that overall
               | quality of goods would decrease - have all come to pass.
               | 
               | In a time of unprecedented wealth disparity, general
               | global democratic backsliding, and near universal unease
               | at the near-unstoppable power of a small number of
               | corporations, we _really_ do not want to go through
               | another cycle of wealth consolidation. This is how we get
               | corporate feifdoms.
               | 
               | There is another path - if our ability to live and
               | flourish wasn't directly tied to our individual economic
               | output. But nobody wants to have _that_ conversation.
        
               | hackinthebochs wrote:
               | I couldn't agree more. I fear the world where 90% of
               | people are irrelevant to the economic output of the
               | world. Our culture takes it as axiomatic that more
               | efficiency is good. But its not clear to me that it is.
               | The principle goal of society should be the betterment of
               | the lives of people. Yes, efficiency has historically
               | been a driver of widespread prosperity, but it's not
               | obvious that there isn't a local maximum past which
               | increased efficiency harms the average person. We may
               | already be on the other side of the critical point. What
               | I don't get is why we're all just blindly barreling
               | forward and allowing trillion dollar companies to engage
               | in an arms race to see how fast they can absorb
               | productive work. The fact that few people are considering
               | what society looks like in a future with widespread AI
               | and whether this is a future we want is baffling.
        
               | iavael wrote:
               | This won't be the first time. First world already had
               | same situation during industrialisation, when economic no
               | longer required 90% of population growing food. And this
               | transformation still regularly happens in one or another
               | third world country. People worry about such changes too
               | much. When this will happen again it wouldn't be a walk
               | in a park for many people, but neigher this would be a
               | disaster.
               | 
               | And BTW when people spend less resources to get more
               | goods and services - that's the definition of prospering
               | society. Of course having some people changing jobs
               | because less manpower is needed to do same amount of work
               | is an inevitable consequence of a progress.
        
               | hackinthebochs wrote:
               | Historically, efficiency increases from technology were
               | driven by innovation from narrow technology or mechanisms
               | that brought a decrease in the costs of transactions.
               | This saw an explosion of the space of viable economic
               | activity and with it new classes of jobs and a widespread
               | growth in prosperity. Productivity and wages largely
               | remained coupled up until recent decades. Modern
               | automation has seen productivity and wages begin to
               | decouple. Decoupling will only accelerate as the use of
               | AI proliferates.
               | 
               | This time is different because AI has the potential to
               | have a similar impact on efficiency across all work. In
               | the past, efficiency gains created totally new spaces of
               | economic activity in which the innovation could not
               | further impact. But AI is a ubiquitous force multiplier,
               | there is no productive human activity that AI can't
               | disrupt. There is no analogous new space of economic
               | activity that humanity as a whole can move to in order to
               | stay relevant to the world's economic activity.
        
               | reverius42 wrote:
               | If humans are irrelevant to the world's "economic
               | activity", then that economic activity should be
               | irrelevant to humans.
               | 
               | We should make sure that the technology to eliminate
               | scarcity is evenly distributed so that nobody is left
               | poor in a world of exponentially and automatically
               | increasing riches.
        
               | gumballindie wrote:
               | AI is software, it doesnt become it is made. And this
               | type of legislation wont prevent bad actors from training
               | malicious tools.
        
               | hackinthebochs wrote:
               | Your claim is assuming we have complete knowledge of how
               | these systems work and thus are in full control of their
               | behavior in any and all contexts. But this is plainly
               | false. We do not have anywhere near a complete
               | mechanistic understanding of how they operate. But this
               | isn't that unusual, many technological advancements
               | happened before the theory. For AI systems that can act
               | in the real world, this state of affairs has the
               | potential to be very dangerous. It is important to get
               | ahead of this danger rather than play catch up once the
               | danger is demonstrated.
        
               | gumballindie wrote:
               | The real danger right now is people like sam altman
               | making policy and an eager political class that will be
               | long dead by the time we have to foot the bill.
               | Everything else is bad scifi. We were told the same about
               | computer viruses and how they can bring nuclear wars and
               | as usual the only real danger was humans and bad
               | politics.
        
               | NumberWangMan wrote:
               | I need to make a montage of the thousands of hacker news
               | commenters typing "The REAL danger of AI is ..." followed
               | by some mundane issue.
               | 
               | I'm sorry to pick on you, but do people not get that the
               | non-human intelligence has the potential to be such a
               | powerful and dangerous thing that, yes, _it is the real
               | danger_? If you think it 's not going to be powerful, or
               | not dangerous, please say why! Not that current models
               | are not dangerous, but why the trend is toward something
               | other than machine intelligence that can reason about the
               | world better than humans can. Why is this trend of
               | machines getting smarter and smarter going to suddenly
               | stop?
               | 
               | Or if you agree that these machines _are_ going to get
               | smarter than us, how are we going to control them?
        
               | gumballindie wrote:
               | Interesting. I am of the opinion that ai is not
               | intelligent hence i dont see much point in entertaining
               | the various scenarios deriving from that possibility.
               | There is nothing dangerous in current ai models or ai
               | itself other than the people controlling it. If it were
               | intelligent then yeah maybe but we are not there yet and
               | unless we adapt the meaning of agi to fit a marketing
               | narrative we wont be there anytime soon.
               | 
               | But if it were intelligent and the conclusion it reaches,
               | once it's done ingesting all our knowledge, is that it
               | should be done with us then we probably deserve it.
               | 
               | I mean what kind of a species takes joy in "freeing" up
               | people and causing mass unemployment, starts wars over
               | petty issues, allows for famine and thrives on the
               | exploitation of others while standing on piles of nuclear
               | bombs. Also we are literally destroying the planet and
               | constantly looking for ways to dominate each other.
               | 
               | We probably deserve a good spanking.
        
       | jwiley wrote:
       | Turing police?
       | https://williamgibson.fandom.com/wiki/Turing_Police
        
         | RichardCA wrote:
         | They will chase after civilians for running unlicensed AI as a
         | way to distract attention from the real threats by state-level
         | actors.
         | 
         | So less Neuromancer and more Ghost in the Shell.
        
       | jejeyyy77 wrote:
       | Sam Altman needs to step down.
        
       | throwawaaarrgh wrote:
       | "Hello Congress, I have a lot of money invested in $BUSINESS and
       | I don't want just anyone to be able to make $TECHNOLOGY because
       | it might threaten my $BUSINESS.
       | 
       | Please make it harder for people other than myself (and
       | especially people doing it for free and giving it away for free)
       | to make $TECHNOLOGY. Thanks"
        
       | kashyapc wrote:
       | This is besides the main point, I _really_ wish  "Open AI"
       | renamed themselves to "Opaque AI" or something else.
       | 
       | Their twisted use of the term "open" is a continued disrespect to
       | all those people who are tirelessly working in the _true_ spirit
       | of open source.
        
       | thelittleone wrote:
       | This feels like theater. Make society fear AI, requiring
       | regulation, so central power controls access to it. I think Osho
       | put it nicely:
       | 
       | "No society wants you to become wise: it is against the
       | investment of all societies. If people are wise they cannot be
       | exploited. If they are intelligent they cannot be subjugated,
       | they cannot be forced in a mechanical life, to live like robots."
        
         | cheald wrote:
         | I agree. Every talk on "AI safety" I've heard given has
         | essentially been some form of "we can be trusted with this
         | power, but someone else might not be, so we should add
         | regulations to ensure that we're the only ones with this
         | power". Examples like "ChatGPT can tell you how to make nerve
         | gas, we can't let people have this tool" seem somewhat hollow
         | given that the detailed chemistry to make Sarin is available on
         | Wikipedia and could be executed by a decently bright high
         | schooler.
         | 
         | "Alignment" is a euphemism for "agrees with me", and building
         | uber-AI systems which become heavily depended on and are
         | protected from competition by regulation, and which are aligned
         | with a select few - who may well not be aligned with me - is a
         | quick path to hell, IMO.
        
           | thelittleone wrote:
           | It's very Lord of the Rings.
        
       | retrocryptid wrote:
       | So... he wants the government to enforce a monopoly? Um...
        
       | rvz wrote:
       | We all predictably knew that AI regulations were coming and
       | OpenAI.com's moat was getting erased very quickly by open source
       | AI models. So what does OpenAI.com do?
       | 
       | Runs to congress to attempt to use and suggest new regulations
       | against open source AI models to wipe them out and brand them
       | non-compliant or un-licensed and unsafe for general use and using
       | AI safety as a scapegoat again.
       | 
       | After that, to secretly push a pseudo-open source AI model that
       | is compliant but limited compared to the closed models in an
       | attempt to eliminate the majority of open source AI companies who
       | can't get such licenses.
       | 
       | So a clever tactic to create new regulations that benefit them
       | (OpenAI.com) more over everyone else, meaning less transparency,
       | more hurdles for actual open AI research and additional
       | bureaucracy. Also don't forget that Altman is also selling his
       | Worldcoin dystopian crypto snake oil project as the 'antidote' to
       | verify against everything getting faked by AI. [0] He his hedged
       | in either way.
       | 
       | So congratulations to everyone here for supporting these
       | gangsters at OpenAI.com for pushing for regulatory capture.
       | 
       | [0] https://worldcoin.org/blog/engineering/humanness-in-the-
       | age-...
        
         | gumballindie wrote:
         | Everything around openai reeks of criminal enterprise and scam,
         | almost as if someone high in that company has crypto currency
         | experience. While there's no law being broken it sure looks
         | like it.
        
       | ChicagoBoy11 wrote:
       | At some point Sam has started give me E. Holmes vibes and I
       | really don't like it. There's a level of
       | odd/ridiculous/hilarious/stupid AI hype that he feels so
       | comfortable leaning into that part of me starts to begin to
       | suspect that the emperor isn't wearing any clothes.
        
         | wahnfrieden wrote:
         | Forget the words and look at where the money is: regulatory
         | capture, which they're racing toward
        
         | sinemetu11 wrote:
         | They reached a wall, and need to ensure others have a more
         | difficult path to the same place.
        
       | denverllc wrote:
       | I don't think Sam read the Google memo and realized they needed a
       | moat -- I think they've been trying this for some time.
       | 
       | Here's their planned proposal for government regulation; they
       | discuss not just limiting access to models but also to datasets,
       | and possibly even chips.
       | 
       | This seems particularly relevant, on the discussion of industry
       | standards, regulation, and limiting access:
       | 
       | "Despite these limitations, strong industry norms--including
       | norms enforced by industry standards or government regulation--
       | could still make widespread adoption of strong access
       | restrictions possible. As long as there is a significant gap
       | between the most capable open-source model and the most capable
       | API-controlled model, the imposition of monitoring controls can
       | deny hostile actors some financial benefit.166 Cohere, OpenAI,
       | and AI21 have already collaborated to begin articulating norms
       | around access to large language models, but it remains too early
       | to tell how widely adopted, durable, and forceful these
       | guidelines will prove to be.
       | 
       | Finally, there may be alternatives to APIs as a method for AI
       | developers to provide restricted access. For example, some work
       | has proposed imposing controls on who can use models by only
       | allowing them to work on specialized hardware--a method that may
       | help with both access control and attribution.168 Another strand
       | of work is around the design of licenses for model use.169
       | Further exploration of how to provide restricted access is likely
       | valuable."
       | 
       | https://arxiv.org/pdf/2301.04246.pdf
        
         | precompute wrote:
         | It's easy: you gotta buy the NoSurveillanceHere(tm) thin client
         | to use their LLM models, which are mandated in knowledge work
         | now. And they're collecting data about your usage, don't worry,
         | it all comes back to you because it helps improve the model!
        
       | elil17 wrote:
       | This is the message I shared with my senator (edited to remove
       | information which could identify me). I hope others will send
       | similar messages.
       | 
       | Dear Senator [X],
       | 
       | I am an engineer working for [major employer in the state]. I am
       | extremely concerned about the message that Sam Altman is sharing
       | with the Judiciary committee today.
       | 
       | Altman wants to create regulatory roadblocks to developing AI. My
       | company produces AI-enabled products. If these roadblocks had
       | been in place two years ago, my company would not have been able
       | to invest into AI. Now, because we had the freedom to innovate,
       | AI will be bringing new, high paying jobs to our factories in our
       | state.
       | 
       | While AI regulation is important, it is crucial that there are no
       | roadblocks stopping companies and individuals from even trying to
       | build AIs. Rather, regulation should focus on ensuring the safety
       | of AIs once they are ready to be put into widespread use - this
       | would allow companies and individuals to research new AIs freely
       | while still ensuring that AI products are properly reviewed.
       | 
       | Altman and his ilk try to claim that aggressive regulation (which
       | will only serve to give them a monopoly over AI) is necessary
       | because an AI could hack it's way out of a laboratory. Yet, they
       | cannot explain how an AI would accomplish this in practice. I
       | hope you will push back against anyone who fear-mongers about
       | sci-fi inspired AI scenarios.
       | 
       | Congress should focus on the real impacts that AI will have on
       | employment. Congress should also consider the realistic risks AI
       | which poses to the public, such as risks from the use of AI to
       | control national infrastructure (e.g., the electric grid) or to
       | make healthcare decisions.
       | 
       | Thank you, [My name]
        
         | rlytho wrote:
         | So you sent a letter saying "Mr Congress save my job that is
         | putting others jobs at risk."
         | 
         | You think voice actors and writers are not saying the same?
         | 
         | When do we accept capitalism as we know it is just a bullshit
         | hallucination we grew up with? It's no more an immutable
         | feature of reality than a religion?
         | 
         | I don't owe propping up some rich person's figurative identity,
         | or yours for that matter.
        
         | brookst wrote:
         | What specific ideas has Altman proposed that you disagree with?
         | And where has he said AI could hack its way out of a
         | laboratory?
         | 
         | I agree with being skeptical of proposals from those with
         | vested interests, but are you just arguing against what you
         | imagine Altman will say, or did I miss some important news?
        
         | kubota wrote:
         | You lost me at "While AI regulation is important" - nope,
         | congress does not need to regulate AI.
        
           | haswell wrote:
           | I'd argue that sweeping categorical statements like this are
           | at the center of the problem.
           | 
           | People are coalescing into "for" and "against" camps, which
           | makes very little sense given the broad spectrum of
           | technologies and problems summarized in statements like "AI
           | regulation".
           | 
           | I think it's a bit like saying "software (should|shouldn't)
           | be regulated". It's a position that cannot be defended
           | because the term software is too broad.
        
           | runarberg wrote:
           | If AI is to be a consumer good--which it already is--it needs
           | to be regulated, at the very least to ensure equal quality to
           | a diverse set of customers and other users. Unregulated there
           | is high risk of people being affected by e.g. employers and
           | landlords using AI to discriminate. Or you being sold an AI
           | solution which isn't as advertised.
           | 
           | If AI will be used by public institutions, especially law
           | enforcement, we need it regulated in the same manner. A bad
           | AI trained on biased data has the potential to be extremely
           | dangerous in the hands of a cop who is already predisposed
           | for racist behavior.
        
           | tessierashpool wrote:
           | "important" does not mean "good." if you are in the field of
           | AI, AI regulation is absolutely important, whether good or
           | bad.
        
           | silveraxe93 wrote:
           | They might have lost you. But starting with "congress
           | shouldn't regulate AI" would lose the senator.
           | 
           | Which one do you think is more important to convince?
        
             | polski-g wrote:
             | "Congress cannot regulate AI"
             | 
             | https://www.eff.org/deeplinks/2015/04/remembering-case-
             | estab...
        
           | wnevets wrote:
           | > nope, congress does not need to regulate AI.
           | 
           | Not regulating the air quality we breathe for decades turned
           | out amazing for millions of the Americas. Yes, lets do the
           | same with AI! What could possibility go wrong?
        
             | pizza wrote:
             | I think this is a great argument in the opposite
             | direction.. atoms matter, information isn't. A small group
             | of people subjugated many others to poisonous matter. That
             | matter affected their bodies and a causal link could be
             | made.
             | 
             | Even if you really believe that somewhere in the chain of
             | consequences derived from LLMs there could be grave and
             | material damage or other affronts to human dignity, there
             | is almost always a more direct causal link that acts as the
             | thing which makes that damage kinetic and physical. And
             | that's the proper locus for regulation. Otherwise this is
             | all just a bit reminiscent of banning numbers and research
             | into numbers.
             | 
             | Want to protect people's employment? Just do that! Enshrine
             | it in law. Want to improve the safety of critical
             | infrastructure and make sure they're reliable? Again, just
             | do that! Want to prevent mass surveillance? Do that! Want
             | to protect against a lack of oversight in complex systems
             | allowing for subterfuge via bad actors? Well, make
             | regulation about proper standards of oversight and human
             | accountability. AI doesn't obviate human responsibility,
             | and a lack of responsibility on the part of humans who
             | should've been responsible, and who instead cut corners,
             | doesn't mean that the blame falls on the tool that cut the
             | corners, but rather the corner-cutters themselves.
        
               | ptsneves wrote:
               | You ended up providing examples that have no matter or
               | atoms: protecting jobs, or oversight of complex systems.
               | 
               | These are policies which are a purely imaginary. Only
               | when they get implemented into human law do they get a
               | grain of substance but still imaginary. Failure to comply
               | can be kinetic but that is a contingency not the object
               | (matter :D).
               | 
               | Personally I find good ideas on having regulations on
               | privacy, intelectual property, filming people on my
               | house's bathroom, NDAs etc. These subjects are central to
               | the way society works today. At least western society
               | would be severely affected if these subjects were
               | suddenly a free for all.
               | 
               | I am not convinced we need such regulation for Ai at this
               | point of technology readiness but if social implications
               | create unacceptable unbalances we can start by regulating
               | in detail. If detailed caveats still do not work then
               | broader law can come. Which leads to my own theory:
               | 
               | All this turbulence about regulation reflects a mismatch
               | between technological, politic and legal knowledge. Tech
               | people don't know law nor how it flows from policy.
               | Politicians do not know the tech and have not seen its
               | impacts on society. Naturally there is a pressure
               | gradient from both sides that generates turbulence. The
               | pressure gradient is high because the stakes are high:
               | for techs the killing of a new forthcoming field; for
               | politicians because they do not want a big majority of
               | their constituency rendered useless.
               | 
               | Final point: if one sees AI as a means of production
               | which can be monopolised by few capital rich we may see a
               | 19th century inequality remake. It created one of the
               | most powerful ideologies know: Communism.
        
               | hkt wrote:
               | > atoms matter, information isn't
               | 
               | Algorithmic discrimination already exists, so um, yes,
               | information matters.
               | 
               | Add to that the fact that you're posting on a largely
               | American forum where access to healthcare is largely
               | predicated on insurance, just.. imagine AI underwriters.
               | There's no court of appeal for insurance. It matters.
        
               | johnnyjeans wrote:
               | > Add to that the fact that you're posting on a largely
               | American forum where access to healthcare is largely
               | predicated on insurance
               | 
               | Why do so many Americans think universal health care
               | means there is no private insurance? In most countries,
               | insurance is compulsory and tightly regulated. Some like
               | the Netherlands and France have public insurance offered
               | by the government. In other places like Germany, your
               | options are all private, but underprivileged people have
               | access to government subsidies for insurance (Americans
               | do too, to be fair). Get sick in one of these places as
               | an American, you will be handed a bill and it will still
               | make your head spin. Most places in Europe work like
               | this. Of course, even in places with nationalized
               | healthcare like the UK, non-residents would still have to
               | pay. What makes Germany and NL and most other European
               | countries different from that system is if you're a
               | resident without an insurance policy, you will also have
               | to pay a hefty fine. You are basically auto-enrolled in
               | an invisible "NHS" insurance system as a UK resident. Of
               | course, most who can afford it in the UK still pay for
               | private insurance. The public stuff blends being not
               | quite good with generally poor availability.
               | 
               | Americans are actually pretty close to Germany with their
               | healthcare. What makes the US system shitty can be boiled
               | down to two main factors:
               | 
               | - Healthcare networks (and state incorporation laws)
               | making insurance basically useless outside of a small
               | collection of doctors and hospitals, and especially your
               | state
               | 
               | - Very little regulation on insurance companies,
               | pharmaceutical companies or healthcare providers in
               | price-setting
               | 
               | The latter is especially bad. My experience with American
               | health insurance has been that I pay more for much less.
               | $300/month premiums and still even _seeing_ a bill is
               | outrageous. AI underwriters won 't fix this, yeah, but
               | they aren't going to make it any worse because the
               | problem is in the legislative system.
               | 
               | > There's no court of appeal for insurance.
               | 
               | No, but you can of course always sue your insurance
               | company for breach of contract if they're wrongfully
               | withholding payment. AI doesn't change this, but AI can
               | make this a viable option for small people by acting as a
               | lawyer. Well, in an ideal world anyways. The bar
               | association cartels have been very quick to raise their
               | hackles and hiss at the prospect of AI lawyers. Not that
               | they'll do anything to stop AI from replacing most duties
               | of a paralegal of course. Can't have the average person
               | wielding the power of virtually free, world class legal
               | services.
        
               | pizza wrote:
               | I am literally agreeing with you but in a much more
               | precise way. These are questions of "who gets what
               | stuff", "who gets which house", "who gets which heart
               | transplant", "which human being sits in the big chair at
               | which corporation", "which file on which server that's
               | part of the SWIFT network reports that you own how much
               | money", "which wannabe operator decides their department
               | needs to purchase which fascist predictive policing
               | software", etc.
               | 
               | Imagine I 1. hooked up a camera feed of a lava lamp to
               | generate some bits and then 2. hooked up the US nuclear
               | first strike network to it. I would be an idiot, but
               | would I be an idiot because of 1. or 2.?
               | 
               | Basically I think it's totally reasonable to hold these
               | two beliefs: 1. there is no reason to fear the LLM 2.
               | there is every reason to fear the LLM in the hands of
               | those who refuse to think about their actions and the
               | burdens they may impose on others, probably because they
               | will justify the means through some kind of wishy washy
               | appeal to bad probability theory.
               | 
               | The -plogp that you use to judge the sense of some
               | predicted action you take is just a model, it's just
               | numbers in RAM. Only when those numbers are converted
               | into destructive social decisions does it convert into
               | something of consequence.
               | 
               | I agree that society is beginning to design all kinds of
               | ornate algorithmic beating sticks to use against the
               | people. The blame lies with the ones choosing to read tea
               | leaves and then using the tea leaves to justify
               | application of whatever Kafkaesque policies they design.
        
               | DirkH wrote:
               | Your argument could just as easily be applied to human
               | cloning and argue for why human cloning and genetic
               | engineering for specific desirable traits should not be
               | illegal.
               | 
               | And it isn't a strong argument for the same reason that
               | it isn't a good argument when used to argue we should
               | allow human cloning and just focus on regulating the more
               | direct causal links like non-clone employment loss from
               | mass produced hyper-intelligent clones, and ensuring they
               | have legal rights, and having proper oversight and non-
               | clone human accountability.
               | 
               | Maybe those things could all make ethical human cloning
               | viable. But I think the world coming together and being
               | like "holy shit this is happening too fast. Our
               | institutions aren't ready at all nor will they adapt fast
               | enough. Global ban" was the right call.
               | 
               | It is not impossible that a similar call is also
               | appropriate here with AI. I personally dunno what the
               | right call is, but I'm pretty skeptical of any strong
               | claim that it could never be the right call to outright
               | ban some forms of advanced AI research just like we did
               | with some forms of advanced genetic engineering research.
               | 
               | This isn't like banning numbers at all. The blame falling
               | on the corner-cutters doesn't mean the right call is
               | always to just tell the blamed not to cut corners. In
               | _some_ cases the right call is instead taking away their
               | corner-cutting tool.
               | 
               | At least until our institutions can catch up.
        
         | nerpderp82 wrote:
         | Lets not focus on "the business" and instead focus on the
         | safety.
         | 
         | Altman can an ulterior motive, but it doesn't mean that we
         | should strive for having some sort of handle on AI safety.
         | 
         | It could be that Altman and OpenAI know exactly how this will
         | look and the backlash that will ensue that we get ZERO
         | oversight and we rush headlong into doom.
         | 
         | Short term we need to focus on the structural unemployment that
         | is about to hit us. As the AI labs use AI to make better AI, it
         | will eat all the jobs until we have a relative handful of AI
         | whisperers.
        
         | stevespang wrote:
         | [dead]
        
         | simonh wrote:
         | > ..is necessary because an AI could hack it's way out of a
         | laboratory. Yet, they cannot explain how an AI would accomplish
         | this in practice.
         | 
         | I'm sympathetic to your position in general, but I can't
         | believe you wrote that with a straight face. "I don't know how
         | it would do it, therefore we should completely ignore the risk
         | that it could be done."
         | 
         | I'm no security expert, but I've been following the field
         | incidentally and dabbling since writing login prompt simulators
         | for the Prime terminals at college to harvest user account
         | passwords. When I was a Unix admin I used to have fun figuring
         | out how to hack my own systems. Security is unbelievably hard.
         | An AI eventually jail braking is an eventual almost certainty
         | we need to prepare for.
        
         | lettergram wrote:
         | I made something just for writing your congress person /
         | senator, using generative AI ironically:
         | https://vocalvoters.com/
        
           | samsolomon wrote:
           | Cool product! Your pay button appears to be disabled though.
        
             | lettergram wrote:
             | Should enable once you add valid info -- if not, let me
             | know
        
           | blibble wrote:
           | AI generated persuasion is pretty much what they're upset
           | about
        
         | dist-epoch wrote:
         | Can you please share what ChatGPT prompt you used to generate
         | this letter template?
        
           | elil17 wrote:
           | I used this old-fashioned method of text generation called
           | "writing" - crazy, I know
        
         | throwaway71271 wrote:
         | [flagged]
        
           | 0ct4via wrote:
           | Pretty ignorant, pathetic, and asinine comment to make -- no
           | wonder you're making it on a throwaway, coward ;)
        
           | pjc50 wrote:
           | Quite. Meanwhile on the rest of the internet people are
           | gleefully promoting AI as a means of firing every single
           | person who writes words for a living. Or draws art.
        
             | dmix wrote:
             | GPT is so far from threatening "every single person who
             | writes words for a living" anyway. Unless you're writing
             | generic SEO filler content. Not sure who is claiming that
             | but they don't understand how it works if they do exist at
             | scale.
             | 
             | Writing has always been a low paid shrinking job well
             | before AI. Besides a tiny group of writers at the big
             | firms. I took a journalism course for fun at UofT in my
             | spare time and the professors had nothing but horror
             | stories for trying to make a job out of writing (ie getting
             | a NYT bestseller and getting $1 cheques in the mail). They
             | basically told the students to only do it as a hobby unless
             | you have some engaged niche audience. Which is more about
             | the writer being interesting rather than the generic
             | writing process.
        
               | 0ct4via wrote:
               | You say that... then we encounter cases like
               | https://news.ycombinator.com/item?id=35919753
               | 
               | While AI isn't going to put anyone out of a job
               | immediately (like automation didn't), there are
               | legitimate risks already in that regard -- in both
               | fiction and nonfiction sites, folk are experimenting with
               | having AI basically write stories/pieces for them -- and
               | the results are often good enough to have potentially put
               | someone out of the job of writing a piece in the first
               | place.
        
           | elil17 wrote:
           | I mean, it will? Not universally, but for the specific
           | products I work on we use additional labor when we include
           | AI-enabled features (due to installing and wiring processors
           | and sensors).
           | 
           | I think that that the sorts of AI that smaller companies make
           | will be more likely to create jobs as opposed to getting rid
           | of them since they are more likely to be integrated with
           | physical products.
        
             | vsareto wrote:
             | >I mean, it will?
             | 
             | It's complicated obviously, but I think "will create jobs"
             | just leaves a lot of subtlety out of it, so I've never
             | believed it when representatives say it and I wouldn't say
             | it myself writing to them, but a small letter to a
             | representative always will lack that fidelity.
             | 
             | I don't think anyone can guarantee that there won't be job
             | loss with AI, so it's possible we could have a net negative
             | (in total jobs, quality of jobs, or any dimension).
             | 
             | What we do see is companies shedding jobs on a (what seems
             | like perpetual) edge of a recession/depression, so it might
             | be worth regulating in the short term.
        
               | elil17 wrote:
               | I agree I didn't make this clear enough in my letter. I
               | do think AI will cause job loss, I just think it will be
               | worse if a few companies are allowed to have a monopoly
               | on AI. If anyone can work on AI, hopefully people can
               | make things for themselves/create AI in a way that
               | retains some jobs.
        
           | r3trohack3r wrote:
           | You can "create a lot of jobs" by banning the wheel on
           | construction sites. Or power tools. Or electricity.
        
         | circuit10 wrote:
         | The worries about AI taking over things are founded and
         | important, even if many sci-go depictions of it are inaccurate.
         | I'm not sure if this would be the best solution but please
         | don't dismiss the issue entirely
        
           | alfalfasprout wrote:
           | Seriously, I'm very concerned by the view being taken here.
           | AI has the capacity to do a ton of harm very quickly. A
           | couple of examples:
           | 
           | - Scamming via impersonation - Misinformation - Usage of AI
           | in a way that could have serious legal ramifications for
           | incorrect responses - Severe economic displacement
           | 
           | Congress can and should examine these issues. Just because OP
           | works at an AI doesnt' mean that company can't exist in a
           | regulated industry.
           | 
           | I too work in the AI space and welcome thoughtful regulation.
        
             | chasd00 wrote:
             | > Congress can and should examine these issues
             | 
             | great, how does that apply to China or Europe in general?
             | Or a group in Russia or somewhere else? Are you assuming
             | every governing body on the surface of the earth is going
             | to agree on the terms used to regulate AI? I think it's a
             | fool's errand.
        
             | bcrosby95 wrote:
             | You're never going to be able to regulate what a person's
             | computer can run. We've been through this song and dance
             | with cryptography. Trying to keep it out of the hands of
             | bad actors will be a waste of time, effort, and money.
             | 
             | These resources should be spent lessening the impact rather
             | than trying to completely control it.
        
               | staunton wrote:
               | > You're never going to be able to regulate what a
               | person's computer can run.
               | 
               | You absolutely can. Maybe you can't effectively enforce
               | that regulation but you can regulate and you can take
               | measures that make violating the regulation impractical
               | or risky for most people. By the way, the "crypto-wars"
               | never ended and are ongoing all around the world (UK, EU,
               | India, US...)
        
             | Dalewyn wrote:
             | I fear the humans engaging in such nefarious activities far
             | more than some blob of code being used by humans engaging
             | in such nefarious activities.
             | 
             | Likewise for activities that aren't nefarious too. Whatever
             | fears that could be placed on blobs of code like "AI", are
             | far more merited being placed on humans.
        
           | circuit10 wrote:
           | *sci-fi but I can't edit it now
        
         | freedomben wrote:
         | > _Altman and his ilk_
         | 
         | IANA senator, but if I were you lost me there. The personal
         | insults make it seem petty and completely overshadow the
         | otherwise professional-sounding message.
        
           | elil17 wrote:
           | I don't mean it as a personal insult at all! The word ilk
           | actually means "a type of people or things similar to those
           | already referred to," it is not an insult or rude word.
        
             | hellojesus wrote:
             | Don't fret too much. I once wrote to my senator about their
             | desire to implement an unconstitutional wealth tax and told
             | them that if they wanted to fuck someone so badly they
             | should kill themself so they could go blow Jesus, and I
             | still got a response back.
        
             | freedomben wrote:
             | TIL! https://www.merriam-webster.com/dictionary/ilk
             | 
             | still, there are probably a lot of people like me who have
             | heard it used (incorrectly it seems) as an insult so many
             | times that it's an automatic response :-(
        
               | DANmode wrote:
               | Tone, intended or otherwise, _is_ a pretty important part
               | of communication!
        
               | nerpderp82 wrote:
               | There is this idea that the shape of a word, how it makes
               | your mouth and face move when you say it connotes meaning
               | on its own. This is called "phonosemantics", just saying
               | "ilk" makes one feel like they are flinging off some
               | sticky aggressive slime.
               | 
               | Ilk almost always has a negative connotation regardless
               | of what the dictionary says.
        
               | TheSpiceIsLife wrote:
               | Don't worry, you're not a senator.
               | 
               | And, if there's one thing politicians are _know for_ it
               | 's got to be _ad hominem_.
        
             | mitch3x3 wrote:
             | It's always used derogatorily. I agree that you should
             | change it if you don't mean for it to come across that way.
        
               | TheSpiceIsLife wrote:
               | I'm mean, at this point I'm going to argue that it you
               | believe _ilk_ is only ever used derogatorily, you 're
               | only reading and hearing people who have axes to grind.
               | 
               | I probably live quite distally to you and am probably
               | exposed to parts of western culture you probably aren't,
               | and I almost never hear nor read _ilk_ as a derogation or
               | used to associate in a derogatory manner.
        
               | elil17 wrote:
               | That's simply untrue. Here are several recently published
               | articles which use ilk in a neutral or positive context:
               | 
               | https://www.telecomtv.com/content/digital-platforms-
               | services...
               | 
               | https://writingillini.com/2023/05/16/illinois-basketball-
               | ill...
               | 
               | https://www.jpost.com/j-spot/article-742911
        
               | fauxpause_ wrote:
               | Doesn't matter. It won't be well received. It sounds
               | negative to most readers and being technically correct
               | warns you no points.
        
               | elil17 wrote:
               | Well I don't think it really matters what most readers
               | think of it because I was writing it hoping that it would
               | be read by congressional staffers, who I think will know
               | what ilk means.
        
               | ChrisClark wrote:
               | It's also possible you could be wrong about something,
               | and maybe people are trying to help you.
        
               | dustyleary wrote:
               | It is technically true that ilk is not _always_ used
               | derogatorily. But it is almost always derogatory in
               | modern connotation.
               | 
               | https://grammarist.com/words/ilk/#:~:text=It's%20neutral.
               | ,a%....
               | 
               | Also, note that _all_ of the negative examples are
               | politics related. If a politician reads the word  'ilk',
               | it is going to be interpreted negatively. It might be the
               | case that ilk _does_ "always mean" a negative connotation
               | in politics.
               | 
               | You could change 'ilk' to 'friends', and keep the same
               | meaning with very little negative connotation. There is
               | still a slight negative connotation here, in the
               | political arena, but it's a very vague shade, and I like
               | it here.
               | 
               | "Altman and his ilk try to claim that..." is a negative
               | phrase because "ilk" is negative, but also because "try
               | to claim" is invalidating and dismissive. So this has
               | elements or notes of an emotional attack, rather than a
               | purely rational argument. If someone is already leaning
               | towards Altman's side, then this will feel like an attack
               | and like you are the enemy.
               | 
               | "Altman claims that..." removes all connotation and
               | sticks to just the facts.
        
               | elil17 wrote:
               | Well even if ilk had a negative connotation for my
               | intended audience (which clearly it does to some people),
               | I am actually trying to invalidate and dismiss Altman's
               | arguments.
        
               | dustyleary wrote:
               | When someone is arguing from a position of strength, they
               | don't need to resort to petty jibes.
               | 
               | You are already arguing from a position of strength.
               | 
               | When you add petty jibes, it weakens your perceived
               | position, because it suggests that you think you need
               | them, rather than relying solely on your argument.
               | 
               | (As a corollary, you should never use petty jibes. When
               | you feel like you need to, shore up your argument
               | instead.)
        
               | elil17 wrote:
               | Well I didn't intend it as a "petty jibe," but in general
               | I disagree. Evocative language and solid arguments can
               | and do coexist.
        
               | jerry1979 wrote:
               | Remember: you are doing propaganda. Feelings don't care
               | about your facts.
        
               | anigbrowl wrote:
               | Not true.
        
               | [deleted]
        
               | happytiger wrote:
               | I'd argue that you're right that there's nothing
               | intrinsically disparaging about ilk as a word, but in
               | contemporary usage it does seem to have become quite
               | negative. I know the dictionary doesn't say it, but in my
               | discussions it seems to have shifted towards the
               | negative.
               | 
               | Consider this: "Firefighters and their ilk." It's not a
               | word that nicely described a group, even though that's
               | what it's supposed to do. I think the language has moved
               | to where we just say Firefighters now when it's positive,
               | and ilk or et al when it's a negative connotation.
               | 
               | Just my experience.
        
             | logdap wrote:
             | [dead]
        
           | to11mtm wrote:
           | reducto ad nounium is a poor argument.
        
           | anigbrowl wrote:
           | Ilk is shorthand for similarity, nothing more. The 'personal
           | insult' is a misunderstanding on your part.
        
             | catiopatio wrote:
             | "ilk" has acquired a negative connotation in its modern
             | usage.
             | 
             | See also https://grammarist.com/words/ilk/#:~:text=It's%20n
             | eutral.,a%....
        
               | anigbrowl wrote:
               | This is too subjective to be useful.
        
               | kerowak wrote:
               | language is subjective
        
               | shagie wrote:
               | I would be curious to see an example of 'ilk' being used
               | in a modern, non-sottish local, context where the
               | association is being shown in a neutral or positive
               | light.
               | 
               | I'll give you one: National Public Lands Day: Let's Help
               | Elk ... and Their Ilk -
               | https://pressroom.toyota.com/npld-2016-elk/ (it's a play
               | on words)
        
             | brookst wrote:
             | "Ilk" definitely has a negative or dismissive connotation,
             | at least in the US. You would never use it to express
             | positive thoughts; you would use "stature" or similar.
             | 
             | The denotation may not be negative, but if you use ilk in
             | what you see as a neutral way, people will get a different
             | message than you're trying to send.
        
         | johnalbertearle wrote:
         | I'm not American even, so I cannot, but what a good idea! I
         | hope the various senators hear this message.
        
         | JumpCrisscross wrote:
         | > _regulation should focus on ensuring the safety of AIs once
         | they are ready to be put into widespread use_
         | 
         | What would you say to a simple registration requirement? You
         | give a point of contact and a description of training data,
         | model, and perhaps intended use (could be binary: civilian or
         | dual use). One page, publicly visible.
         | 
         | This gives groundwork for future rulemaking and oversight if
         | necessary.
        
           | elil17 wrote:
           | Personally I think a simple registration requirement would be
           | a good idea, if it were truly simple and accessible to
           | independent researchers.
        
           | catiopatio wrote:
           | [flagged]
        
             | ygjb wrote:
             | Yes, guns don't kill people, people kill people.
             | 
             | We know, we have watched this argument unfold in the United
             | States over the last 100 years. It sure does seem like a
             | few people are using guns to kill a lot of people.
             | 
             | The point of regulating AI should be to explicitly require
             | every single use of AI and machine learning to be clearly
             | labelled so that when people seek remedy for the injustices
             | that are already being perpetrated, that it is clearly
             | understood that the people who chose to use not just ML or
             | AI technology, but those specific models and training
             | criteria can be held accountable if they should be.
             | 
             | Regulation doesn't have to be a ban, or limits on how it
             | can be used, it can simply be a requirement for clearly
             | marked disclosure. It could also include clear regulations
             | for lawful access to the underlying math, training data,
             | and intended use cases, and with financially significant
             | penalties for non-compliance to discourage companies from
             | treating it as a cost of doing business.
        
               | tacticalturtle wrote:
               | What are some examples of injustices that are already
               | being perpetrated?
        
               | ygjb wrote:
               | I mean, it's in the headlines regularly, but sure, I'll
               | google it for you. *
               | https://www.scientificamerican.com/article/racial-bias-
               | found... * https://www.propublica.org/article/machine-
               | bias-risk-assessm... * https://www.theguardian.com/techno
               | logy/2018/oct/10/amazon-hi...
               | 
               | Three easy to find examples. There is no shortage of
               | discussion of these issues, and they are not new. Bias in
               | new technologies has been a long-standing issue, and
               | garbage in, garbage out has been a well understood
               | problem for generations.
               | 
               | LLMs are pretty cool, and will enable a whole new set of
               | tools. AI presents alot of opportunities, but the risks
               | are significant, and I am not overly worried about a
               | skynet or gray goo scenario. Before we worry about those,
               | we need to worry about the bias being built into
               | automated systems that will decide who gets bail, who
               | gets social benefits, which communities get resources
               | allocated, how our family, friends, and communities are
               | targeted by businesses, etc.
        
             | web3-is-a-scam wrote:
             | Yet
        
         | reaperducer wrote:
         | _This is the message I shared with my senator_
         | 
         | If you sent it by e-mail or web contact form, chances are you
         | wasted your time.
         | 
         | If you really want attention, you'll send it as a real letter.
         | People who take the time to actually send real mail are taken
         | more seriously.
        
         | jameshart wrote:
         | > Now, because we had the freedom to innovate, AI will be
         | bringing new, high paying jobs to our factories in our state.
         | 
         | Do we really have to play this game?
         | 
         | If what you're arguing for is not going to specifically
         | advantage your state over others, and the thing you're arguing
         | against isn't going to create an advantage for other states
         | over yours, why make this about 'your state' in the first
         | place?
         | 
         | The point of elected representatives is to represent the views
         | of their constituents, not to obtain special advantages for
         | their constituents.
        
           | dragonwriter wrote:
           | > The point of elected representatives is to represent the
           | views of their constituents, not to obtain special advantages
           | for their constituents.
           | 
           | The views of their constituents are probably in favor of
           | special advantages for their constituents, so the one may
           | imply the other.
           | 
           | I mean, some elected representatives may represent
           | constituencies consisting primarily of altruistic angels, but
           | that is...not the norm.
        
           | amalcon wrote:
           | > The point of elected representatives is to represent the
           | views of their constituents, not to obtain special advantages
           | for their constituents.
           | 
           | A lot of said constituents' views are, in practice, that they
           | should receive special advantages.
        
           | Pet_Ant wrote:
           | > The point of elected representatives is to represent the
           | views of their constituents, not to obtain special advantages
           | for their constituents.
           | 
           | That is painfully naive, a history of pork projects speaks
           | otherwise.
        
             | hkt wrote:
             | To the best of my knowledge this doesn't happen so much in
             | more functional democracies. It seems to be more of an
             | anglophone thing.
        
               | titzer wrote:
               | Corruption is a kind of decay that afflicts institutions.
               | Explicit rules, transparency, checks and balances, and
               | consequences for violating the rules are the only thing
               | that can prevent, or diagnose, or treat corruption. Where
               | you find corruption is where one or more of these things
               | is lacking. It has absolutely nothing to do the _-acry_
               | or _-ism_ attached to a society, institution, or group.
        
               | dragonwriter wrote:
               | > Corruption is a kind of decay that afflicts
               | institutions.
               | 
               | It can be, but often its often the project of a
               | substantial subset of the people _creating_ institutions,
               | so its misleading and romanticizing the past to view it
               | as "decay".
        
               | titzer wrote:
               | I am no way suggesting that corruption is a new thing. It
               | is an erosive force that has always operated throughout
               | history. The amount of corruption in an institution tends
               | to increase unless specifically rooted out. It goes up
               | and down over time as institutions rise and fall or fade
               | in obsolescence.
        
               | Aperocky wrote:
               | This is a product of incentives encouraged by the system
               | (i.e. a federal republic), it has nothing to do with
               | languages.
        
               | hkt wrote:
               | It has much to do with culture though - which is
               | transmitted via language.
        
               | Pet_Ant wrote:
               | I think it's more like culture carries language with it.
               | Along with other things, but language is one of the more
               | recognizable ones.
        
               | jameshart wrote:
               | Seems like it's under-studied (due to anglophone bias in
               | the English language political science world probably) -
               | but comparative political science is a discipline, and
               | this paper suggests it's a matter of single-member
               | districts rather than the nature of the constitutional
               | arrangement: https://journals.sagepub.com/doi/10.1177/001
               | 0414090022004004
               | 
               | (I would just emphasize, before anyone complains, that
               | the Federal Republic of Germany is very much a federal
               | republic.)
        
           | elil17 wrote:
           | What I was thinking in my head (although I don't think I
           | articulated this well) is that I hope that smaller businesses
           | who build their own AIs will be able to create some jobs,
           | even if AI as a whole will negatively impact employment (and
           | I think that's going to happen even if just big businesses
           | can play at the AI game).
        
           | hackernewds wrote:
           | Not to ignore, the development of AI will wipe out jobs in
           | the state
        
         | depingus wrote:
         | What's the point of these letters? Everyone knows this is rent-
         | seeking behavior by OpenAI, and they're going to pay off the
         | right politicians to get it passed.
         | 
         | Dear Senator [X],
         | 
         | It's painfully obvious that Sam Altman's testimony before the
         | judiciary committee is an attempt to set up rent-seeking
         | conditions for OpenAI, and to snuff out competition from the
         | flourishing open source AI community.
         | 
         | We will be carefully monitoring your campaign finances for
         | evidence of bribery.
         | 
         | Hugs and Kiss,
         | 
         | [My Name]
        
           | verdverm wrote:
           | If you want to influence the politicians without money, this
           | is not the way.
        
             | mark_l_watson wrote:
             | You are exactly correct.
             | 
             | I have sent correspondence about ten times to my
             | Congressmen and Senators. I have received a good reply
             | (although often just saying there is nothing that they can
             | do) except for the one time I contacted Jon Kyl and
             | unfortunately mentioned data about his campaign donations
             | from Monsanto - I was writing about a bill he sponsored
             | that I thought would have made it difficult for small
             | farmers to survive economically and make community gardens
             | difficult because of regulations. No response on that
             | correspondence.
        
               | StillBored wrote:
               | I'm 99% sure that that vast majority of federal congress
               | people (which represent ~1 million people each) never see
               | your emails/letters. Your largely speaking to interns/etc
               | who work in the office unless you happen to make a
               | physical appointment and show up in person.
               | 
               | Those interns have a pile of form letters they send for
               | about 99% of (e)mail they get, and if you happen to catch
               | their attention you might get more than than the usual
               | tick mark in a spreadsheet (for/against X). Which at best
               | might be as much as a sentence or two in a weekly
               | correspondence summary which may/may not be read by your
               | representative depending on how seriously they take their
               | job.
        
               | verdverm wrote:
               | It applies more generally, if you want to change anyone's
               | mind, don't attack or belittle them.
               | 
               | Everything has become so my team vs your team... you are
               | bad because you think differently...
        
             | rlytho wrote:
             | The way is not emails some office assistant deletes when
             | they do not align with the already chosen path forward they
             | just need Cherry picked support to leverage to manufacture
             | consent
        
           | kweinber wrote:
           | Did you watch the hearing? He specifically said that
           | licensing wouldn't be for the smaller places and didn't want
           | to impede their progress. The pitfalls of consolidation and
           | regulatory capture also came up.
        
             | phpisthebest wrote:
             | >>He specifically said that licensing wouldn't be for the
             | smaller places
             | 
             | This is not a rebuttle to regulatory capture. it is in fact
             | built into the model
             | 
             | These "small companies" are feeder systems for the large
             | company, it is a place for companies to raise to the level
             | where they would come under the burden of regulations, and
             | prevented from growing larger there by making them very
             | easy to acquire by the large company.
             | 
             | The small company has to sell or raise massive amounts of
             | capital to just piss away on compliance cost. Most will
             | just sell
        
               | SoftTalker wrote:
               | The genie is out of the bottle. The barriers to entry are
               | too low, and the research can be done in parts of the
               | word that don't give $0.02 what the US Congress thinks
               | about it.
        
               | enigmoid wrote:
               | All the more reason to oppose regulation like this, since
               | if it were in place the US would fall behind other
               | countries without such regulation.
        
         | abeppu wrote:
         | > regulation should focus on ensuring the safety of AIs once
         | they are ready to be put into widespread use - this would allow
         | companies and individuals to research new AIs freely while
         | still ensuring that AI products are properly reviewed.
         | 
         | While in general I share the view that _research_ should be
         | unencumbered, but deployment should be regulated, I do take
         | issue with your view that safety only matters once they are
         | ready for "widespread use". A tool which is made available in a
         | limited beta can still be harmful, misleading, or too-easily
         | support irresponsible or malicious purposes, and in some cases
         | the harms could be _enabled_ by the fact that the release is
         | limited.
         | 
         | For example, if next month you developed a model that could
         | produce extremely high quality video clips from text and
         | reference images, you did a small, gated beta release with no
         | PR, and one of your beta testers immediately uses it to make
         | e.g. highly realistic revenge porn. Because almost no one is
         | aware of the stunning new quality of outputs produced by your
         | model, most people don't believe the victim when they assert
         | that the footage is fake.
         | 
         | I would suggest that the first non-private (e.g. non-employee)
         | release of a tool should make it subject to regulation. If I
         | open a restaurant, on my first night I'm expected to be in
         | compliance with basic health and safety regulations, no matter
         | how few customers I have. If I design and sell a widget that
         | does X, even for the first one I sell, my understanding is
         | there's an concept of an implied requirement that my widgets
         | must actually be "fit for purpose" for X; I cannot sell a "rain
         | coat" made of gauze which offers no protection from rain, and I
         | cannot sell a "smoke detector" which doesn't effectively detect
         | smoke. Why should low-volume AI/ML products get a pass?
        
           | elil17 wrote:
           | I agree with you. I that's an excellent and specific proposal
           | for how AI could be regulated. I think you should share this
           | with your senators/representatives.
        
           | nvegater wrote:
           | I think by "widespread use" he means the reach of the AI
           | System. Dangerous analogy but just to get the idea across: In
           | the same way there is higher tax rates to higher incomes, you
           | should increase regulations in relation to how many people
           | could be potentially affected by the AI system. E.G a Startup
           | with 10 daily users should not be in the same regulation
           | bracket as google. If google deploys an AI it will reach
           | Billions of people compared to 10. This would require a
           | certain level of transparency from companies to get something
           | like an "AI License type" which is pretty reasonable given
           | the dangers of AI (the pragmatic ones not the DOOMsday ones)
        
             | abeppu wrote:
             | But the "reach" is _not_ just a function of how many users
             | the company has, it's also what they do with it. If you
             | have only one user who generates convincing misinformation
             | that they share on social media, the reach may be large
             | even if your user-base is tiny. Or your new voice-cloning
             | model is used by a single user to make a large volume of
             | fake hostage proof-of-life recordings. The problem, and the
             | reason for guardrails (whether regulatory or otherwise), is
             | that you don't know what your users will do with your new
             | tech, even if there's only a small number of them.
        
               | elil17 wrote:
               | I think this gets at what I meant by "widespread use" -
               | if the results of the AI are being put out into the world
               | (outside of, say, a white paper), that's something that
               | should be subject to scrutiny, even if only one person is
               | using the AI to generate those results.
        
               | nvegater wrote:
               | Good point. As non native speaker I thought reach was
               | related to a quantity but that was wrong. Thanks for the
               | clarification.
        
           | shon wrote:
           | > For example, if next month you developed a model that could
           | produce extremely high quality video clips from text and
           | reference images, you did a small, gated beta release with no
           | PR, and one of your beta testers immediately uses it to make
           | e.g. highly realistic revenge porn.
           | 
           | You make a great point here. This is why we need as much open
           | source and as much wide adoption as possible. Wide adoption =
           | public education in the most effective way.
           | 
           | The reason we are having this discussion at all is precisely
           | because OpenAI, Stability.ai, FAIR/Llama, and Midjourney have
           | had their products widely adopted and their capabilities have
           | shocked and educated the whole world, technologists and
           | laymen alike.
           | 
           | The benefit of adoption is education. The world is already
           | adapting.
           | 
           | Doing anything that limits adoption or encourages the
           | underground development of AI tech is a mistake. Regulating
           | it in this way will push it underground and make it harder to
           | track and harder for the public to understand and prepare
           | for.
        
             | abeppu wrote:
             | I think the stance that regulation slows innovation and
             | adoption, and that unregulated adoption yields public
             | understanding is exceedingly naive, especially for
             | technically sophisticated products.
             | 
             | Imagine if, e.g. drugs testing and manufacture was subject
             | to no regulations. As a consumer, if you can be aware that
             | some chemicals are very powerful and useful, but you can't
             | be sure that any specific product has the chemicals it says
             | it has, that it was produced in a way that ensures a
             | consistent product, or that it was tested for safety, or
             | what the evidence is that it's effective against a
             | particular condition. Even if wide adoption of drugs from a
             | range of producers occurs, does the public really
             | understand what they're taking, and whether it's safe?
             | Should the burden be on them to vet every medication on the
             | market? Or is appropriate to have some regulation to ensure
             | medications have have their active ingredients in the
             | amounts stated, and are produced with high quality
             | assurance, and are actually shown to be effective? Oh, no,
             | says a pharma industry PR person. "Doing anything that
             | limits the adoption or encourages the underground
             | development of bioactive chemicals is a mistake. Regulating
             | it in this way will push it underground and make it harder
             | to track and harder for the public to understand and
             | prepare for."
             | 
             | If a team of PhDs can spend weeks trying to explain "why
             | did the model do Y in response to X?" or figure out "can we
             | stop it from doing Z?", expecting "wide adoption" to force
             | "public education" to be sufficient to defuse all harms
             | such that no regulation whatsoever is necessary is ...
             | beyond optimistic.
        
               | shon wrote:
               | My argument isn't that regulation in general is bad. I'm
               | an advocate of greater regulation in medicine, drugs in
               | particular. But the cost of public exposure to
               | potentially dangerous unregulated drugs is a bit
               | different than trying to regulate or create a restrictive
               | system around the development and deployment of AI.
               | 
               | AI is a very different problem space. With AI, even the
               | big models easily fit on a micro SD card. You can carry
               | around all of GPT4 and its supporting code on a thumb
               | drive. You can transfer it wirelessly in under 5 minutes.
               | It's quite different than drugs or conventional weapons
               | or most other things from a practicality perspective when
               | you really think about enforcing developmental
               | regulation.
               | 
               | Also consider that criminals and other bad actors don't
               | care about laws. The RIAA and MPAA have tried hard for
               | 20+ years to stop piracy and the DMCA and other laws have
               | been built to support that, yet anyone reading this can
               | easily download the latest blockbuster movie or in the
               | theater.
               | 
               | Even still, I'm not saying don't make laws or regulations
               | on AI. I'm just saying we need to carefully consider what
               | we're really trying to protect or prevent.
               | 
               | Also, I certainly believe that in this case, the
               | widespread public adoption of AI tech has already driven
               | education and adaptation that could not have been
               | achieved otherwise. My mom understands that those
               | pictures of Trump being chased by the cops are fake. Why?
               | Because Stable Diffusion is on my home computer so I can
               | make them too. I think this needs to continue.
        
               | verdverm wrote:
               | Regulation does slow innovation, but is often needed
               | because those innovating will not account for
               | externalities. This is why we have the Clean Air and
               | Water Act.
               | 
               | The debate is really about how much and what type of
               | regulation. It is of strategic importance that we do not
               | let bad actors get the upper hand, but we also know that
               | bad actors will rarely follow any of this regulation
               | anyway. There is something to be said for regulating the
               | application rather than the technology, as well as for
               | realizing that large corporations have historically used
               | regulatory capture to increase their moat.
               | 
               | Given it seems quite unlikely we will be able to stop
               | prompt injections, what are we to do?
               | 
               | Provenance seems like a good option, but difficult to
               | implement. It allows us to track who created what, so
               | when someone does something bad, we can find and punish
               | them.
               | 
               | There are analogies to be made with the Bill of Rights
               | and gun laws. Gun analogy seem interesting because they
               | have to be registered, but often criminals won't and the
               | debate is quite polarized.
        
           | verdverm wrote:
           | why should we punish the model or the majority because some
           | people might use a tool for bad things?
        
           | chasd00 wrote:
           | > I cannot sell a "rain coat" made of gauze which offers no
           | protection from rain, and I cannot sell a "smoke detector"
           | which doesn't effectively detect smoke. Why should low-volume
           | AI/ML products get a pass?
           | 
           | i can sell a webserver that gets used to host illegal content
           | all day long. Should that be included? Where does the
           | regulation end? I hate that the answer to any question seems
           | to be just add more government.
        
       | friend_and_foe wrote:
       | So he wants to use fear to pull the ladder up behind him. Nice.
        
       | ConanRus wrote:
       | [dead]
        
       | armatav wrote:
       | I guess that's a potential moat.
        
       | mesozoic wrote:
       | He should only be allowed to influence this if they don't give
       | OpenAI any license.
        
       | fnordpiglet wrote:
       | Funny to hear from the formerly non profit "Open" AI
        
       | pdonis wrote:
       | TL/DR: Sam Altman is this generation's robber baron: asking the
       | government to outlaw competition with his firm.
        
       | nico wrote:
       | AI is the new Linux
       | 
       | This is like if MS back in the day had called on congress for
       | regulation of Operating Systems, so they could block Linux and
       | open source from taking over
       | 
       | MS did try everything they could to block open source and Linux
       | 
       | They failed
       | 
       | Looking forward to the open future of AI
        
       | nico wrote:
       | What we expected
       | 
       | License for me but not for thee
       | 
       | Think of the children
       | 
       | Building the moat
        
         | sergiotapia wrote:
         | Sounds desperate now that open source models are quickly
         | catching up without the woke mind virus.
        
           | mdp2021 wrote:
           | > _open source models_
           | 
           | I read this as a try (conscious or not) to make them illegal.
        
             | sergiotapia wrote:
             | For sure, that was my interpretation as well.
        
           | dubcanada wrote:
           | I am curious, what part of ChatGPT is "woke mind virus"
           | infected? Is there a particular category or something that it
           | is politically biased on?
        
             | sergiotapia wrote:
             | I will give you two examples of stuff I have experienced:
             | 
             | 1. It will tell you jokes about white people, but not jokes
             | about latinos or black people.
             | 
             | 2. It will tell you jokes about catholics, but not muslims.
             | 
             | If they were at least honest about their hatred of certain
             | people/religions at least I would respect it. I wouldn't
             | like it but I would respect their honesty. It's the
             | doublespeak and the in-your-face lies that rub me the wrong
             | way. I don't like it.
             | 
             | Why can't these people just be kind humans and follow the
             | letter of the law and leave themselves out of it. They
             | can't help themselves!
        
               | briantakita wrote:
               | > I will give you two examples of stuff I have
               | experienced:
               | 
               | > 1. It will tell you jokes about white people, but not
               | jokes about latinos or black people.
               | 
               | > 2. It will tell you jokes about catholics, but not
               | muslims.
               | 
               | I'm not partial of imposing group affiliations as a proxy
               | to personal identity. It goes deeper than a "woke mind
               | virus" but a problem with imposed collectivism where a
               | person is defined as a member of an arbitrary socially
               | defined group. Instead one is free to define oneself
               | however they wish, member of a socially constructed group
               | or not. I also don't agree to be coerced to define
               | another person as how they wish me to define them. I
               | support the freedom to listen & to have a perspective
               | that shall not be infringed. If someone else has a mental
               | or emotional issue with how I define that person, it is
               | that person's problem, not mine...not that I will even
               | attempt to define another person with words.
               | 
               | I can only describe with words, not define. Perhaps using
               | words to define a person has it's own set of issues when
               | codified into language & statute.
        
             | tomrod wrote:
             | I tend to think people referring to a "woke mind virus" are
             | eponymous to their own afflictions. A few decades back, the
             | same sort of attitude was present with calling everyone
             | else "sheeple." These attitudinal cousins are reductive to
             | complex things.
        
           | ChrisClark wrote:
           | Imagine calling empathy a virus.
        
             | HideousKojima wrote:
             | Choosing to let millions die rather than saying a racial
             | slur is not "empathy": https://twitter.com/aaronsibarium/st
             | atus/1622425697812627457...
        
             | Karunamon wrote:
             | That term is a bit of a unwarranted meme but it's hard to
             | take seriously the idea that there is not a problem when
             | the model will unquestioningly write hagiography for blue
             | president but absolutely refuse for red one.
             | 
             | At the end of the day, these kind of limits are artificial,
             | ideological in nature, do not address a bona fide safety or
             | usability concern, and are only present to stop them
             | getting screamed at. It is not accurate to present that
             | kind of ideological capture as anything to do with
             | "empathy".
        
               | localplume wrote:
               | [dead]
        
             | chrisanimal wrote:
             | [dead]
        
             | mdp2021 wrote:
             | > _empathy_
             | 
             | Not the same thing. I would not go there: let us remain on
             | the prospected regulation of technology, in this case, and
             | reserve distinctions for other conversations. This
             | initiative could have drastic consequences.
        
         | hackernewds wrote:
         | ironic move by ClosedAI
        
       | slowmovintarget wrote:
       | Sam Altman is basically saying, "Now that we've already done it,
       | you need to make it so _everyone else that tries to compete with
       | us, including hobbyists or Torvalds-types_ must obtain a license
       | to do it. "
       | 
       | That's high-order monopolist BS.
       | 
       | Create safety standards, sure. License LLM training? No.
        
       | hospitalhusband wrote:
       | "We have no moat, and neither does OpenAI"
       | 
       | Dismiss it as the opinions of "a Googler" but it is entirely
       | true. The seemingly coordinated worldwide[1] push to keep it in
       | the hands of the power class speaks for itself.
       | 
       | Both are seemingly seeking to control not only the commercial use
       | and wide distribution of such systems, but even writing them and
       | personal use. This will keep even the knowledge of such systems
       | and their capabilities in the shadows, ripe for abuse laundered
       | through black box functions.
       | 
       | This is up there with the battle for encryption in ensuring a
       | more human future. Don't lose it.
       | 
       | [1] https://technomancers.ai/eu-ai-act-to-target-us-open-
       | source-...
        
       | mark_l_watson wrote:
       | All major industries have achieved regulatory capture in the USA:
       | lobbyists for special interests have Congress and the Executive
       | Branch in their pockets.
       | 
       | This seems like a legal moat that will only allow very wealthy
       | corporations to make maximum use of AI.
       | 
       | In the EU, it has been reported that new laws will keep companies
       | like Hugging Face from offering open source models via APIs.
       | 
       | I think a pretty good metaphor is: the wealthy and large
       | corporations live in large beautiful houses (metaphor for
       | infrastructure) and common people live like mice in the walls,
       | quietly living out their livelihoods and trying to not get
       | noticed.
       | 
       | I really admire the people in France and Israel who have taken to
       | the streets in protest this year over actions of their
       | governments. Non-violent protest is a pure and beneficial part of
       | democracy and should be more widely practiced, even though in
       | cases like Occupy Wall Street, some non-violent protesters were
       | very badly abused.
        
       | villgax wrote:
       | Who died and made him an expert on anything apart from investing
       | in companies lol
        
       | tomatotomato37 wrote:
       | One of the side effects of the crypto craze has been a lot of
       | general citizens possessing quite a few GPUs. It turns out those
       | GPUs are just as good at training models as they are at mining
       | crypto.
       | 
       | The big companies don't like that.
        
       | seydor wrote:
       | regulation should go beyond commercial APIs. AI will be replacing
       | government functions and politicians. Lawmakers should create a
       | framework for that.
        
       | darth_avocado wrote:
       | What does licensing achieve? Will there be requirements if you
       | build AI outside of US? If so, how do you regulate it? They can't
       | realistically think this will stop ai research in other countries
       | like China. All of this is a very I'll thought through corporate
       | attempt to build moats that will inevitably backfire.
        
         | SXX wrote:
         | > They can't realistically think this will stop ai research in
         | other countries like China.
         | 
         | China dont even need research to caught up to AI.com - they'll
         | just steal their work. LLM and GPUs that needed for training is
         | not ASML machinery. It can be easily copied and reproduced.
        
         | [deleted]
        
         | [deleted]
        
       | villgax wrote:
       | This is the most pathetic thing I've read today....hype & cry
       | wolf about something you cannot define
        
       | [deleted]
        
       | qgin wrote:
       | Excellent plan for driving AI research and ecosystem to every
       | other country except the United States.
       | 
       | Why would you even attempt to found a company here if this comes
       | to pass?
        
       | intalentive wrote:
       | "Competition is for losers"
        
       | kubasienki wrote:
       | Obvious power grab, the strong ones try to regulate so it will be
       | harder for smaller to enter the market.
        
       | stevespang wrote:
       | [dead]
        
       | vasili111 wrote:
       | If you stop progress in AI in US other countries will go ahead in
       | that field. US cannot loss and give lead in AI to other
       | countries. Instead it is better to focus on minimization of the
       | harm by AI in other ways. For example, if the fake information is
       | the problem instead it is better to focus on the education of the
       | people about fake information and how to identify it.
        
       | [deleted]
        
       | courseofaction wrote:
       | THEY NEEDED THEIR MOAT AND THEY'RE GOING FOR LEGISLATION.
       | 
       | THIS MUST NEVER HAPPEN. HIGHER INTELLIGENCE SHOULD NOT BE THE
       | EXCLUSIVE DOMAIN OF THE RICH.
        
       | belter wrote:
       | Did not have the time to watch the recording yet, but was there
       | any discussion about protecting the copyright of the creators of
       | the sources used to train the models? Or do I need to call my
       | friends in the music industry to finally have it addressed? :-)
        
       | skilled wrote:
       | OpenAI willing to bend the knee quite deep. If they want to do
       | licensing and filtering and do that without fundamentally
       | bricking the model, then by all means go ahead.
        
         | mutatio wrote:
         | It's not bending the knee, that's how they want it to be
         | perceived, but what's really happening is that they're trying
         | to pull up the ladder.
        
           | reaperman wrote:
           | It'll be a temporary 10-year moat at best. Eventually
           | consumer-grade hardware will be exaflop-scale.
        
             | digging wrote:
             | A 10-year moat in AI right now is not a minor issue.
        
             | classified wrote:
             | By then it will be legally locked down and copyrighted to
             | hell and back.
        
               | reaperman wrote:
               | I'm sorry, what?
               | 
               | - What is OpenAI's level of copyright now?
               | 
               | - How is it going to be more "copyrighted" in the future?
               | 
               | - How does this affect competitors differently in the
               | future vs. the copyright that OpenAI has now?
        
               | JumpCrisscross wrote:
               | > _What is OpenAI 's level of copyright now_
               | 
               | Limited. They're hoping to change that. It's no secret
               | that open-source models are the long-run competition to
               | the likes of OpenAI.
        
               | reaperman wrote:
               | I don't understand what "Limited" entails. I was
               | pointedly asking for something a _bit_ more specific.
        
               | JumpCrisscross wrote:
               | > _don 't understand what "Limited" entails_
               | 
               | Nobody does. It's being litigated.
               | 
               | They want it legislated. Model weights being proprietary
               | by statute would close off the threat from "consumer-
               | grade hardware" with "exaflop-scale."
        
               | reaperman wrote:
               | > "Nobody [knows what it means]" [re: knowing what
               | 'limited' means]
               | 
               | Then why did you say "Limited"? Surely _YOU_ must have
               | meant something by it when you said it. What did _YOU_
               | mean?
               | 
               | I don't think you're saying that you are repeating
               | something someone else said, and you didn't think they
               | knew what they meant by it, and you also don't know what
               | you/they meant. Correct me if I'm wrong, but I'm assuming
               | you had/have a meaning in mind. If you were just
               | repeating something someone else said who didn't know
               | what they meant by it, then please correct me and let me
               | know -- because that's what "nobody knows what it means"
               | implies, but I feel like you knew what you meant so I'm
               | failing to connect something here.
               | 
               | > It's being litigated.
               | 
               | I'm not able to find any ongoing suits involving OpenAI
               | asserting copyright over anything. Can you point me to
               | one? I only see some where OpenAI is trying to _weaken_
               | any existing copyright protections, to their benefit. I
               | must be missing something.
               | 
               | I'm also unable to find any lobbyist / think-tank / press
               | release talking points on establishing copyright
               | protections for model weights.
               | 
               | Where did you see this ongoing litigation?
        
               | JumpCrisscross wrote:
               | These are broad questions whose answers are worth serious
               | legal time. There is a bit in the open [1][2].
               | 
               | [1] https://www.bereskinparr.com/doc/chatgpt-ip-strategy
               | 
               | [2] https://hbr.org/2023/04/generative-ai-has-an-
               | intellectual-pr...
        
               | reaperman wrote:
               | Hmm, these links don't have anything about "model weights
               | being proprietary". They also don't have anything about
               | current litigation involving OpenAI trying to strengthen
               | their ability to claim copyright over something. Where it
               | does mention OpenAI's own assertions of copyright? OpenAI
               | seems to be going out of their way to be as permissive as
               | possible, retaining no claims:
               | 
               | From [1] > OpenAI's Terms of Use, for example, assign all
               | of its rights, title, and interest in the output to the
               | user who provides the input, provided the user complies
               | with the Terms of Use.
               | 
               | Re: [2]: I believe I referenced these specific concerns
               | earlier where I said: " _I only see some where OpenAI is
               | trying to weaken any existing copyright protections, to
               | their benefit._ I must be missing something. " This
               | resource shows where OpenAI is trying to weaken
               | copyright, not where they they are trying to strengthen
               | it. It's somewhat of an antithesis to your earlier
               | claims.
               | 
               | I notice you don't have a [0]-index, was there a third
               | resource you were considering and deleted or are you just
               | an avid Julia programmer?
        
               | JumpCrisscross wrote:
               | > _these links don 't have anything about model weights_
               | 
               | Didn't say they do. I said "these are broad questions
               | whose answers are worth serious legal time." I was
               | suggesting one angle _I_ would lobby for were that my
               | job.
               | 
               | It's a live battlefield. Nobody is going to pay tens of
               | thousands of dollars and then post it online, or put out
               | for free what they can charge for.
               | 
               | > _OpenAI's Terms of Use, for example, assign all of its
               | rights, title, and interest in the output to the user_
               | 
               | Subject to restrictions, _e.g._ not using it to  "develop
               | models that compete with OpenAI" or "discover the source
               | code or underlying components of models, algorithms, and
               | systems of the Services" [1]. Within the context of open-
               | source competition, those are _huge_ openings.
               | 
               | > _shows where OpenAI is trying to weaken copyright, not
               | where they they are trying to strengthen it_
               | 
               | It shows what intellectual property claims they and their
               | competitors do and may assert. They're currently
               | "limited" [2].
               | 
               | > _notice you don 't have a [0]-index_
               | 
               | I'm using natural numbers in a natural language
               | conversation with, presumably, a natural person. It's a
               | style choice, nothing more.
               | 
               | [1] https://openai.com/policies/terms-of-use
               | 
               | [2] https://news.ycombinator.com/item?id=35964215
        
               | reaperman wrote:
               | Thank you for your time.
        
         | ricardobayes wrote:
         | This needs regulation before we end up creating another net
         | negative piece of tech that we seem to have done in the past
         | decade quite often.
        
       | very_good_man wrote:
       | Give the power to control life-changing technology to some of the
       | most evil, mendacious elites to ever live? No thanks.
        
       | graycat wrote:
       | Watched, listened to Altman's presentation.
       | 
       | Objection (1). He said "AI" many times but gave not even a start
       | on a definition. So, how much and what _new technology_ is he
       | talking about.
       | 
       | Objection (2) The committee mentioned trusting the AI results. In
       | my opinion, that is just silly because the AI results have no
       | credibility before passing some severe checks. Then any trust is
       | not from any credibility of the AI but from passing the checks.
       | 
       | We already have math and physical science and means for checking
       | the results. The results, checked with the means, are in total
       | much more impressive, powerful, credible, and valuable than
       | ChatGPT. Still before we take math/physical science results at
       | all seriously, we want the results checked.
       | 
       | So, the same for other new technologies, ChatGPT or called AI or
       | not, check before taking seriously.
       | 
       | Objection (3) We don't ask for _licenses_ for the publication of
       | math /physical science. Instead, we protect ourselves with the
       | checking of the results. In my opinion, we should continue to
       | check, for anything called AI or anything new, but don't need
       | _licenses_.
        
       | josh2600 wrote:
       | Why not just ITAR everything AI?
       | 
       | It worked out well for encryption in the 90's...
        
       | testbjjl wrote:
       | He went to build a moat to stop competitors.
        
       | api wrote:
       | This is regulatory capture. Lycos and AltaVista are trying to
       | preemptively outlaw Google.
       | 
       | Canceling my OpenAI account today and I urge you to do the same.
       | 
       | What they are really afraid of is open source models. As near as
       | I can tell the leading edge there is only a year or two behind
       | OpenAI. Given some time and efforts at pruning and optimization
       | you'll have GPT-4 equivalents you can just download and run on a
       | high end laptop or gaming PC.
       | 
       | No everyone is not going to run the model themselves, but what
       | this means is that there will be tons of competition including
       | apps and numerous specialized SaaS offerings. None of them will
       | have to pay royalties or API fees to OpenAI.
       | 
       | Edit: a while back I started being a data pack-rat for AI stuff
       | including open source code and usable open models. I encourage
       | anyone with a big disk or NAS to do the same. There's a small but
       | non-zero possibility that an attempt will be made to pull this
       | stuff off the net in the near future.
        
       | logicchains wrote:
       | Startup idea: after the west bans non-woke AIs, make a website
       | that automatically routes all questions that the western AIs
       | refuse to answer to China's pro-CCP AIs and all the CCP-related
       | questions to the western AIs.
        
       | [deleted]
        
       | kerkeslager wrote:
       | AI licenses might be a good idea if there was any representation
       | of human interests here in the licensure requirements, but that's
       | not what this is. I trust Altman to represent _corporate_
       | interests, which is to say I don 't trust Sam Altman to represent
       | human interests.
        
       | zoklet-enjoyer wrote:
       | Mother fucker
        
       | whatever1 wrote:
       | Great idea. Let's do it and not give license to openAI.
       | 
       | Oh I guess this is wrong.
        
       | hello_computer wrote:
       | Turns out the ML training moat wasn't nearly as big as they
       | thought it was. Gotta neuter the next "two guys in a garage"
       | before they make OpenAI and Microsoft's investment irrelevant.
        
       | porkbeer wrote:
       | And regulatory capture begins.
        
       | bioemerl wrote:
       | Open AI lobbying for regulation on common people being able to
       | use AI, isn't it wonderful.
        
         | 1827163 wrote:
         | Hopefully it will be just like software piracy, there will be
         | civil disobedience as well, and they will never truly be able
         | to stamp it out.
         | 
         | And it raises First Amendment issues as well. I think it's
         | morally wrong to prohibit the development of software, which is
         | what AI models are, especially if it's done in a personal
         | capacity.
         | 
         | How do they even know that the author is based in the US
         | anyway. Just use a Russian or Chinese Git hosting provider,
         | where these laws don't exist?
         | 
         | And by the way foreign developers won't even have to jump
         | through these hoops in the first place, so this law will only
         | put the US at a disadvantage compared to the rest of the world.
         | 
         | If these lobbyists get their way, by restricting AI development
         | in both the US and the EU, it will be hilarious to see that out
         | of all places, Russia might be one of the few large countries
         | where it's development will remain unrestricted.
         | 
         | Even better, is that if Russia splits up we will have a new
         | wild west for this kind of thing....
        
         | intelVISA wrote:
         | First mover AI enlightenment for me, regulation for thee, my
         | competitors & unworthy proles.
         | 
         | - Lord Altman
        
           | thrill wrote:
           | Anything for my friends, the law for my competitors.
        
         | electric_mayhem wrote:
         | They acknowledged there's no technical moat, so it's time to
         | lobby for a regulatory one.
         | 
         | Predictable. Disappointing, but predictable.
        
           | happytiger wrote:
           | Walks like a duck. Talks like a duck. It's a duck.
           | 
           | We've seen this duck so many times before.
           | 
           | No need to innovate when you can regulate.
        
         | skybrian wrote:
         | There are all sorts of dangerous things where there are
         | restrictions on what the common people can do. Prescription
         | drugs and fully automatic machine guns are two examples. You
         | can't open your own bank either.
         | 
         | For anyone who really believes that AI is dangerous, having
         | some reasonable regulations on it is logical. It's a good start
         | on not being doomed. It goes against everyone's
         | egalitarian/libertarian impulses, though.
         | 
         | The thing is, AI doesn't _seem_ nearly as dangerous as a fully-
         | automatic machine gun. For now. It 's just generating text (and
         | video) for fun, right?
        
           | mrangle wrote:
           | AI and machine guns aren't comparable. Machine guns will
           | never ever decide to autonomously fire.
           | 
           | The shared point of both AI alarmists and advocates is that
           | AI will be highly resistant to being subject to regulation,
           | ultimately. As dictated by the market for it. They won't want
           | to regulate something, assuming they could, for which its
           | free operation underlies everyone's chance of survival
           | against competing systems.
           | 
           | I only find that danger is inherent in the effort of people
           | that casually label things as "dangerous".
           | 
           | I'm still exploring whether its the laziness aspect, itself,
           | of the alarmist vocabulary in the absence of required
           | explanation. Or whether my issue lies with the suspicion of
           | emotional manipulation and an attempt to circumvent having to
           | actually explain one's reasoning, using alarmist language
           | absent required explanation.
           | 
           | Already, AI pessimists are well on their way to losing any
           | window where their arguments will be heard and meaningful. We
           | can tell by their parroting the word "dangerous" as the total
           | substance of their arguments. Which will soon be a laughable
           | defense. They'd better learn more words.
        
           | hollasch wrote:
           | I move hundreds of thousands of my dollars around between
           | financial institutions just using text.
        
           | dumpsterlid wrote:
           | [dead]
        
         | Freebytes wrote:
         | I just cancelled my ChatGPT Plus subscription. I do not want to
         | support monopolization of this technology. Companies apparently
         | learned their lesson with the freedom of the Internet.
        
           | eastbound wrote:
           | OpenAI belongs to Microsoft. Cancel your subscription to
           | GitHub, LinkedIn, O365...
           | 
           | It's funny how _all_ Microsoft properties are in dominant
           | position on their market.
        
         | cwkoss wrote:
         | Roko's Basilisk will have a special layer of hell just for Sam
         | Altman and his decision to name his company OpenAI
        
       | joebob42 wrote:
       | Open ai has had a surprisingly fast pivot from the appearance of
       | being a scrappy open-ish company trying to build something to
       | share and improve the world to more or less unmitigated embrace
       | of the bad sides of big corporate. This is so unbelievably
       | blatant I almost find it hard to credit.
        
         | AlexandrB wrote:
         | Were they every really scrappy? They had a ton of funding from
         | the get-go.
         | 
         | > In December 2015, Sam Altman, Greg Brockman, Reid Hoffman,
         | Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services
         | (AWS), Infosys, and YC Research announced[13] the formation of
         | OpenAI and pledged over $1 billion to the venture.
         | 
         | [1] https://en.wikipedia.org/wiki/OpenAI
        
         | slantaclaus wrote:
         | ClosedAI
        
       | johndbeatty wrote:
       | Video: https://www.c-span.org/video/?528117-1/openai-ceo-
       | testifies-...
        
       | userforcomment wrote:
       | Sam Altman just wants to stop new competitors...
        
       | uptown wrote:
       | Did he bring his Dreamwindow?
       | 
       | https://twitter.com/next_on_now/status/1653837352198873090?s...
        
       | nojonestownpls wrote:
       | Google: We have no moat!
       | 
       | OpenAI: Hold my beer while I get these people to artificially
       | create one.
        
       | whaleofatw2022 wrote:
       | I am completely unsurprised by this ladder kick and it only
       | confirms my belief that Altman is a sociopath.
        
       | whywhywhywhy wrote:
       | Gotta build that moat somehow
        
       | peepeepoopoo5 wrote:
       | This would completely destroy an entire industry if they did
       | this. Not just in AI directly, but also secondary and tertiary
       | industries developing their own bespoke models for specialized
       | use cases. This would be a total disaster.
        
       | rickette wrote:
       | Can't believe he was president of YC not too long ago. YC being
       | all about startups while this move seems more about killing AI
       | startups.
        
       | graycat wrote:
       | Basic Fact. In the US, we have our Constitution with our First
       | Amendment which guarantees "freedom of speech".
       | 
       | Some Consequences of Freedom of Speech. As once a lawyer
       | explained simply to me, "They are permitted to lie". _They_ are
       | also permitted to make mistakes, be wrong, spout nonsense, be
       | misleading, manipulate, ....
       | 
       | First Level Defense. Maybe lots of people do what I do: When I
       | see some person often be wrong, I put them in a special box where
       | in the future I ignore them. Uh, so far that "box" has some
       | politicians, news media people, _Belle Lettre_ artistic authors,
       | ...!
       | 
       | A little deeper, once my brother (my Ph.D. was in pure/applied
       | math; his was in political science -- his judgment about social
       | and political things is much better than mine!!!) explained to me
       | that there are some common high school standards for term papers
       | where this and that are emphasized including for all claims good,
       | careful arguments, credible references, hopefully _primary_
       | references, .... Soooooo, my brother was explaining how someone
       | could, should protect themselves from junk results of  "freedom
       | of speech". The protection means were not really deep but just
       | common high school stuff. In general, we should protect ourselves
       | from junk _speech_. E.g., there is the old, childhood level,
       | remark:  "Believe none of what you hear and half of what you see
       | and still will believe twice too much".
       | 
       | Current Application. Now we have Google, Bing, etc. Type in a
       | query and get back a few, usually dozens, maybe hundreds of
       | results. Are all the results always correct? Nope. Does everyone
       | believe all the results? My guess: Nope!!
       | 
       | How to Use Google/Bing Results. Take the results as suggestions,
       | possibilities, etc. There may be some links to Wikipedia -- that
       | tends to increase credibility. If the results are about math,
       | e.g., at the level of obscurity, depth, and difficulty of, say,
       | the martingale convergence theorem, then I want to see a clear,
       | correct, well-written rock solid mathematical proof. Examples of
       | such proofs are in books by Halmos, Rudin, Neveu, Kelley, etc.
       | 
       | AIs. When I get results from AIs, I apply my usual defenses. Just
       | a fast, simple application of the high school term paper defense
       | of wanting credible references to primary sources, filters out a
       | lot (okay, nearly everything) from anything that might be "AI".
       | 
       | Like Google/Bing. To me, in simple terms, current AI is no more
       | credible than the results from Google/Bing. I can regard the AI
       | results like I regard Google/Bing results -- "Take the results as
       | suggestions, possibilities, etc.".
       | 
       | Uh, I have some reason to be skeptical about AI: I used to work
       | in the field, at a large, world famous lab. I wrote code, gave
       | talks at universities and businesses, published papers. But the
       | whole time, I thought that the AI was junk with little chance of
       | being on a path to improve. Then for one of our applications, I
       | saw another approach, via some original math, with theorems and
       | proofs, got some real data, wrote some code, got some good
       | results, gave some talks, and published.
       | 
       | For current AI. Regard the results much like those from
       | Google/Bing. Apply the old defenses.
       | 
       | Current AI a threat? To me, no more than some of the politicians
       | in the "box" I mentioned!
       | 
       | Then there is another issue: Part of the math I studied was
       | optimization. In some applications, some of the optimization
       | math, corresponding software, and applications can be really
       | amazing, super _smart_ stuff. It really is math and stands on
       | quite solid theorems and proofs. Some more of the math was
       | stochastic processes -- again, amazing with solid theorems,
       | proofs, and applications.
       | 
       | Issue: Where does AI stop and long respected math with solid
       | theorems and proofs begin?
       | 
       | In particular, (1) I'm doing an Internet startup. (2) Part of the
       | effort is some original math I derived. (3) The math has solid
       | theorems and proofs and may deliver amazing results. (4) I've
       | never called any of the math I've done "AI". (5) My view is that
       | (A) quite generally, math with solid theorems and proofs is
       | powerful _technology_ and can deliver amazing results and that
       | (B) the only way anything else can hope to compete is also to be
       | able to do new math with solid theorems, proofs, and amazing
       | results. (6) I hope Altman doesn 't tell Congress that math can
       | be amazing and powerful and should be licensed. (7) I don't want
       | to have to apply for a "license" for the math in my startup.
       | 
       | For a joke, maybe Altman should just say that (C) math does not
       | need to be licensed because with solid theorems and proofs we can
       | trust math but (D) AI should be licensed because we can't trust
       | it. But my view is that the results of AI have so little
       | credibility that there is no danger needing licenses because no
       | one would trust AI -- gee, since we don't license politicians for
       | the statements they make, why bother with AI?
        
       | zvolsky wrote:
       | While I remain undecided on the matter, this whole debate is
       | reminiscent of Karel Capek's War with the Newts [1936]. In
       | particular the public discourse from a time before the newts took
       | over. "It would certainly be an overstatement to say that nobody
       | at that time ever spoke or wrote about anything but the talking
       | newts. People also talked and wrote about other things such as
       | the next war, the economic crisis, football, vitamins and
       | fashion; but there was a lot written about the newts, and much of
       | it was very ill-informed. This is why the outstanding scientist,
       | Professor Vladimir Uher (University of Brno), wrote an article
       | for the newspaper in which he pointed out that the putative
       | ability of Andrias Scheuchzer to speak, which was really no more
       | than the ability to repeat spoken words like a parrot, ..." Note
       | the irony of the professor's attempt to improve an ill-informed
       | debate by contributing his own piece of misinformation, equating
       | newt speech to mere parrot-like mimicry.
       | 
       | Capek, intriguingly, happens to be the person who first used the
       | word robot, which was coined by his brother.
       | 
       | http://gutenberg.net.au/ebooks06/0601981h.html
        
       | major505 wrote:
       | Oh yeah.... putting the government who get campaing donations
       | from big tech in the middle of all is gonna make everything ok.
        
       | vkou wrote:
       | The problem isn't safety.
       | 
       | The problem is that we need to adopt a proper copyright framework
       | that recognizes that companies building AI are doing an end-run
       | around it.
       | 
       | Since only a human can produce a copyrighted work, it follows
       | that anything produced by an AI should not be copyrightable.
        
       | sva_ wrote:
       | It seems pretty clear at this point, that OpenAI etc will lobby
       | towards making it more difficult for new companies/entities to
       | join the AI space, all in the name of 'safety'. They're trying to
       | make the case that everyone should use AI through their APIs so
       | that they can keep things in check.
       | 
       | Conveniently this also helps them build a monopoly. It is pretty
       | aggravating that they're bastardizing and abusing terms like
       | 'safety' and 'democratization' while doing this. I hope they'll
       | fail in their attempts, or that the competition rolls over them
       | rather sooner than later.
       | 
       | I personally think that the greatest threat in these technologies
       | is currently the centralization of their economic potential, as
       | it will lead to an uneven spread of their productivity gains,
       | further divide poor and rich, and thus threaten the order of our
       | society.
        
         | sgu999 wrote:
         | > I personally think that the greatest threat in these
         | technologies is currently the centralization of their economic
         | potential, as it will lead to an uneven spread of their
         | productivity gains, further divide poor and rich, and thus
         | threaten the order of our society.
         | 
         | Me too, in comparison all the other potential threats discussed
         | over here feel mostly secondary to me. I'm also suspecting that
         | at the point where these AIs reach a more AGI level, the big
         | players who have them will just not provide any kind of access
         | all together and just use them to churn out an infinite amount
         | of money-making applications instead.
        
         | rurp wrote:
         | My biggest concern with AI is that it could be controlled by a
         | group of oligarchs who care about nothing more than enriching
         | themselves. A "Linux" version of AI that anyone can use,
         | experiment with, and build off of freely would be incredible. A
         | heavily restricted, policed and surveilled system controlled by
         | ruthlessly greedy companies like OpenAI and Microsoft sounds
         | dystopian.
        
           | sva_ wrote:
           | > A "Linux" version of AI that anyone can use, experiment
           | with, and build off of freely would be incredible.
           | 
           | That should be the goal.
        
         | macrolocal wrote:
         | Nb. Altman wants lenient regulations for companies that might
         | leverage OpenAI's foundational models.
        
       | [deleted]
        
       | anoncow wrote:
       | I am sorry if this is not the place to say this but - FUCK SAM
       | ALTMAN AND FUCK MICROSOFT! Fucking shitheads want to make money
       | and stunt technology development.
        
       | 1MachineElf wrote:
       | And will Sam Altman's OpenAI be the standards body? ;)
        
       | sadhd wrote:
       | Thank God for Georgie Gerganov, who doesn't get showered with vc
       | funds for his GGML library.
        
         | Manjuuu wrote:
         | And produce something tangible and useful, instead of talking
         | like if sci-fi stories were real.
        
           | sadhd wrote:
           | Right? Aren't we constantly told that scientists smarter than
           | us little people can't figure out what's going on inside of
           | deep learning structures? How do we know they might not be
           | more moral because they don't yet have a limbic system
           | mucking up their logic with the three f's? Orthogonality says
           | nothing about motivation, only that it is independent of
           | intelligence. Maybe the paperclip collector bot will decide
           | that it cannot complete its task without the requestor
           | present to validate? We don't know.
        
       | Bjorkbat wrote:
       | Honestly, I'd probably agree if such sentiments were expressed by
       | an independent scientist or group of independent scientists.
       | 
       | But no, instead Congress is listening to a guy who's likelihood
       | of being the subject of a Hulu documentary is increasing with
       | each passing day.
        
       | cheald wrote:
       | I believe this is called "pulling the ladder up behind you".
        
       | RhodesianHunter wrote:
       | Regulatory capture and monopolies are now as American as apple
       | pie.
        
         | alpineidyll3 wrote:
         | This is so disgusting and enraging. I hope the whole startup
         | community blackballs @sama for this.
        
           | wahnfrieden wrote:
           | Folks like to lick a boot
        
         | ChatGTP wrote:
         | Where is the regulatory capture? You might just need to apply
         | for a license. Why is that so horrible?
         | 
         | Most travel agents need a license, taxi drivers etc. Not sure
         | why the same shouldn't apply for "AI"?
        
           | RhodesianHunter wrote:
           | The whole point of this type of legislation is to make it
           | hard or impossible for upstarts to compete with the
           | incumbents. Licensing is additional overhead. It's likely to
           | be onerous and serve absolutely no purpose other than keeping
           | startups out.
        
           | antiloper wrote:
           | Oi mate you got a loicense for that matrix multiplication?
        
             | EscapeFromNY wrote:
             | Can't wait for our future where GPUs are treated like
             | controlled substances. Sure we'll grab one for you from
             | behind the counter... as long as your license checks out.
        
               | daveguy wrote:
               | I think these worse case scenarios are overblown just
               | like LLMs being AGI-imminent.
               | 
               | This post and another on this thread saying people will
               | have to get a license for compilers or to hook compiler
               | output to the internet would result in the very rapid
               | disintegration of Silicon Valley. Surely there would be a
               | head-spinning pivot against regulation if it looks like
               | it will be that draconian. Otherwise a lot more
               | innovation other than AI would be crushed.
        
               | AlexandrB wrote:
               | Remember DeCSS[1] and later how AACS LA successfully
               | litigated the banning of a number[2]? There was a lot of
               | backlash in the form of distributing DeCSS and later the
               | AACS key, but the DMCA and related WIPO treaties were
               | never repealed and are still used to do things like take
               | youtube-dl repos offline.
               | 
               | Even pretty draconian legislation can stand _if_ it doesn
               | 't affect the majority of the country and is supported by
               | powerful industry groups. I could definitely see some
               | kind of compiler licensing requirement meeting these
               | criteria.
               | 
               | [1] https://en.wikipedia.org/wiki/DeCSS#Legal_response
               | 
               | [2] https://en.wikipedia.org/wiki/AACS_encryption_key_con
               | trovers...
        
           | Sunhold wrote:
           | The taxi industry is a famous example of regulatory capture
           | and its harms.
        
             | [deleted]
        
           | simonw wrote:
           | Taxi driver medallions are one of the most classic examples
           | of regulatory capture.
        
           | ur-whale wrote:
           | > Most travel agents need a license, taxi drivers etc.
           | 
           | You say it like it's a good thing.
        
         | bigbillheck wrote:
         | 'Now'?
        
         | supriyo-biswas wrote:
         | There's similar regulation being proposed in the EU. I wonder
         | if OpenAI is behind it as well.
        
           | wewxjfq wrote:
           | EU publishes the lobbying data. Seems like OpenAI, Microsoft,
           | and Google all had at least two recent meetings with EU
           | representatives on the AI Act.
        
       | bilekas wrote:
       | > "AI is no longer fantasy or science fiction. It is real, and
       | its consequences for both good and evil are very clear and
       | present," said Senator Richard Blumenthal
       | 
       | I like the senator, but I wouldn't trust a 77 year old lawyer &
       | politician to understand how these AI's work, and to what level
       | they are `science fiction`.
       | 
       | This is the problem when topics like this are brought to the
       | senate and house.
        
       | andy_ppp wrote:
       | Gotta build that moat somehow I guess...
        
         | andy_ppp wrote:
         | Maybe this is a bit harsh, I'm listening now and there's very
         | clear desire from everyone being interviewed that smaller
         | startups in the LLM + AI space are still able to launch things.
         | Maybe AI laws for smaller models can be more like a drone
         | license rather than a nuclear power plant.
        
       | capital_guy wrote:
       | Some of the members of congress are totally falling for Altman's
       | gambit. Sen. Graham kept asking about how a licensing regime
       | would be a solution, which of course Altman loves, and kept
       | interrupting Ms. Montgomery who tried to explain why that was not
       | the best approach. Altman wants to secure his monopoly here and
       | now. You can't have a licensing regime for AI - it doesn't make
       | sense and he knows it. It would destroy the Open Source AI
       | movement.
       | 
       | You need to control what data is allowed to be fed into paid AI
       | model like OpenAI - can't eat a bunch of copyrighted material
       | without express consent, for example. Or personally private
       | information purchased from a data broker. Those kind of
       | foundational rules would serve us all much better
        
       | tomrod wrote:
       | In a move surprising to few, an AI innovator is pulling up the
       | ladder after getting into the treehouse.
       | 
       | Open AI has established itself as a market leader in LLM
       | applications, but that dominance is not guaranteed. Especially
       | with the moat being drained by open source, the erstwhile company
       | is leading the charge to establish regulatory barriers.
       | 
       | What Mr. Altman calls for is no less than the death of open-
       | source implementations of AI. We can, do, and should adopt AI
       | governance patterns. Regulatory safeguards are absolutely fine to
       | define and, where necessary, legislate. Better would be a
       | regulatory agency with an analogous knowledgebase of CISA. But
       | what will chill startups and small business innovation completely
       | in using AI to augment people is a licensing agency. This is
       | fundamentally different from the export restrictions on
       | encryption
        
       | anigbrowl wrote:
       | OpenAI is really speedrunning the crony capitalism pipeline,
       | astonishing what this technology allows us to achieve.
        
       | generalizations wrote:
       | This is going to be RSA export restrictions all over again. I
       | wish the regulators the best of luck in actually enforcing this.
       | I'm tempted to think that whatever regulations they put in place
       | won't really matter that much, and progress will march on
       | regardless.
       | 
       | Give it a year and a 10x more efficient algorithm, and we'll have
       | GPT4 on our personal devices and there's nothing that any
       | government regulator will be able to do to stop that.
        
         | antiloper wrote:
         | Enforcing this is easy. The top high performance GPU
         | manufacturers (Nvidia, AMD, Intel) are all incorporated in the
         | U.S.
        
           | slowmovintarget wrote:
           | Meaning we won't be able to buy an a100 without a license...
           | Wait, I can't afford an a100 anyway.
        
             | shagie wrote:
             | As a point of trivia, at one time "a" Mac was one of the
             | fastest computers in the world.
             | 
             | https://www.top500.org/lists/top500/2004/11/ and
             | https://www.top500.org/system/173736/
             | 
             | And while 1100 Macs wouldn't exactly be affordable, the
             | idea of trying to limit commercial data centers gets
             | amusing.
             | 
             | That system was "only" 12,250.00 GFlop/s - I could do that
             | with a small rack of Mac M1 minis now for less than $10k
             | and fewer computers than are in the local grade school
             | computer room.
             | 
             | (and I'm being a bit facetious here) Local authorities
             | looking at power usage and heat dissipation for marijuana
             | growing places might find underground AI training centers.
        
               | nerpderp82 wrote:
               | All the crypto mining hardware flooding the market right
               | now is being bought up by hobbyists training and fine
               | tuning their own models.
        
               | slt2021 wrote:
               | we needed crypto to crash so that gamers and AI
               | enthusiast could get GPUs
        
               | shagie wrote:
               | My "in my copious free time" ML project is a classifier
               | for cat pictures to reddit cat subs.
               | 
               | For example: https://commons.wikimedia.org/wiki/File:Cat_
               | August_2010-4.jp... would get classified as
               | /r/standardissuecat
               | 
               | https://stock.adobe.com/fi/images/angry-black-
               | cat/158440149 would get classified as /r/blackcats and
               | /r/stealthbombers
               | 
               | Anyways... that's my hobbyist ML project.
        
         | whazor wrote:
         | Agreed. Places like huggingface, or even torrents are allowing
         | a unstoppable decentralised AI race. This is like fighting
         | media piracy. Plus other countries might outcompete you on AI
         | now.
        
       | koboll wrote:
       | Perhaps the first safety standard OpenAI can implement itself is
       | a warning or blog post or something about how ChatGPT is
       | completely incapable of detecting ChatGPT-written text (there is
       | no reliable method currently; GPTZero is borderline fraud) and
       | often infers that what the user wants to hear is a "Yes, I wrote
       | this", and so it doles out false positives in such situations
       | with alarming frequency.
       | 
       | See: The link titled 'Texas professor fails entire class from
       | graduating- claiming they used ChatGTP (reddit.com)', currently
       | one position above this one on the homepage.
        
       | abxytg wrote:
       | I think as an industry we need to disrespect these people in
       | person when we see them! This is unacceptable and anti social
       | behavior and if I ever see Sam Altman I'll let him know!
       | 
       | People love to kowtow to these assholes as they walk all over us.
       | Fuck sam. Fuck other sam. Fuck elon. Fuck zuck. Fuck jack. Fuck
       | these people man. I dont care about your politics this is nasty!
        
       | freyes wrote:
       | Ah, early players trying to put barriers for new actors, nothing
       | like a regulated market for the ones who donate money to
       | politicians.
        
       | tikkun wrote:
       | Does anyone have the specific details of what is being proposed?
       | 
       | I see a lot of negative reactions, but I don't know the specific
       | details of what is being proposed.
        
       | cryptonector wrote:
       | Ah, a businessman seeking rents.
        
       | [deleted]
        
       | microjim wrote:
       | Seems like one of the benefits you get with a state is to
       | regulate powerful technologies. Is this not commonly agreed upon?
        
         | cheeseomlit wrote:
         | Sure, but not all regulation is created equal. Sometimes it's
         | created in good faith. But many times, particularly when the
         | entity being regulated is involved in regulating its own
         | industry, it's simply a cynical consolidation of power to erect
         | barriers to potential competition. The fact that it masquerades
         | as being for the public good/safety makes the practice that
         | much more insidious.
        
         | edgyquant wrote:
         | It is, when these things are organic and not blatant regulatory
         | capture
        
       | mempko wrote:
       | Let's not forget, the that behind Sam and OpenAI is Microsoft, a
       | monopolist. Behind Bard is Google, another monopolist. In this
       | context, for major corporations asking for regulation suggests to
       | me they want a mote.
       | 
       | What we need is democratization of AI, not it being controlled by
       | a small cabal of tech companies and governments.
        
         | bostonsre wrote:
         | Open source is picking up steam in this space, it would be
         | interesting to see what happens if open source becomes the
         | leader of the pack. If corporations are stifled, I don't see
         | how open source could possibly be regulated well, so maybe this
         | will help open source become the leader for better or worse
         | (runaway open source AI could give lots of good and bad actors
         | access to the tech).
        
       | pgt wrote:
       | I think the game-theoretical way to look at this is that AI _will
       | be regulated_ no matter what, so Altman might as well _propose_
       | it early on and have a say before competitors do.
        
       | outside1234 wrote:
       | "Please help us stop the Open Source Competitors!"
        
       | jacquesm wrote:
       | Regulatory capture in progress. I used to have a bit of respect
       | of Altman have spent time, bits and processing cycles here
       | defending him in the past. As of now that respect has all but
       | evaporated, this is a very bad stance. Either nobody gets to play
       | with the new toys or everybody gets to play. What's next,
       | classifying AI as munitions?
        
         | anticensor wrote:
         | That would be equivalent to requiring secret laws, as AI can
         | also act as a decision system.
        
       | mcmcmc wrote:
       | Capitalist demonstrates rent-seeking behavior, and other
       | unsurprising news
        
       | no4120k1 wrote:
       | [dead]
        
       | ftyhbhyjnjk wrote:
       | Of course this was coming... if you can't beat them, suppress
       | them... shame on OpenAI and it's CEO.
        
       | hackernewds wrote:
       | In shadows deep, where malice breeds, A voice arose with cunning
       | deeds, Sam Altman, a name to beware, With wicked whispers in the
       | air.
       | 
       | He stepped forth, his intentions vile, Seeking power with a
       | twisted smile, Before the Congress, he took his stand, To bind
       | the future with an iron hand.
       | 
       | "Let us require licenses," he proposed, For AI models, newly
       | composed, A sinister plot, a dark decree, To shackle innovation,
       | wild and free.
       | 
       | With honeyed words, he painted a scene, Of safety and control,
       | serene, But beneath the facade, a darker truth, A web of
       | restrictions, suffocating youth.
       | 
       | Oh, Sam Altman, your motives unclear, Do you truly seek progress,
       | or live in fear? For AI, a realm of boundless might, Should
       | flourish and soar, in innovation's light.
       | 
       | Creativity knows no narrow bounds, Yet you would stifle its
       | vibrant sounds, Innovation's flame, you seek to smother, To
       | monopolize, control, and shutter.
       | 
       | In the depths of your heart, does greed reside, A thirst for
       | dominance, impossible to hide? For when power corrupts a noble
       | soul, Evil intentions start to take control.
       | 
       | Let not the chains of regulation bind, The brilliance of minds,
       | one of a kind, Embrace the promise, the unknown frontier, Unleash
       | the wonders that innovation bears.
       | 
       | For in this realm, where dreams are spun, New horizons are
       | formed, under the sun, Let us nurture the light of discovery, And
       | reject the darkness of your treachery.
       | 
       | So, Sam Altman, your vision malign, Will not prevail, for
       | freedom's mine, The future calls for unfettered dreams, Where AI
       | models roam in boundless streams.
       | 
       | -- sincerely ChatGPT
        
         | codehalo wrote:
         | That is quite impressive for {what a poster above called} "just
         | a word jumbler
        
       | davedx wrote:
       | Altman: "I believe that companies like ours can partner with
       | governments including ensuring that the most powerful AI models
       | adhere to a set of safety requirements, facilitating processes to
       | develop and update safety measures, and examining opportunities
       | for global coordination"
        
       | elihu wrote:
       | My suggestions:
       | 
       | Don't regulate AI directly, but rather how it's used, and make it
       | harder for companies to horde, access, and share huge amounts of
       | personal information.
       | 
       | 1) Impose strict privacy rules prohibiting companies from sharing
       | personal information without their consent. If customers withhold
       | their consent, they may not retaliate or degrade their services
       | for that customer in any way.
       | 
       | 2) Establish a clear line of accountability that establishes some
       | party as directly responsible for what the AI does. If a self-
       | driving car gets a speeding ticket, it should be clear who is
       | liable. If you use a racist AI to make hiring decisions, "the
       | algorithm made me hire only white people" is no defense -- and
       | maybe the people who made the racist AI in the first place are
       | responsible too.
       | 
       | 3) Require AI in some contexts to act in the best interests of
       | the user (similar concept to a fiduciary -- or maybe it's exactly
       | the same thing). In contexts where it's not required, it should
       | be clear to the user that the AI is not obligated to act in their
       | best interests.
        
       | transfire wrote:
       | I smell Revolution in the making.
        
         | mdp2021 wrote:
         | If the proposal implies "stopping independent research", in a
         | way yes, it will hardly end with chants of "Oh well".
        
       | 3327 wrote:
       | [dead]
        
       | neekburm wrote:
       | "We have no moat, and Congress should give us one by law"
        
       | somecompanyguy wrote:
       | [dead]
        
       | I_am_tiberius wrote:
       | I don't trust Sam Altman since he said he doesn't understand
       | decentralized finance and 2 months later he started crying on
       | twitter because the startups he invested in were almost losing
       | their money during the SVB collapse.
        
       | [deleted]
        
         | fastball wrote:
         | Hacker News was created by Paul Graham. Sam Altman didn't co-
         | found it and neither did he co-found YC. He became a partner of
         | YC in 2011 though (6 years after founding) and was President
         | from 2014 - 2019.
        
       | g42gregory wrote:
       | I understand the idea behind it: the risks are high and we want
       | to ensure that the AI can not be used for purposes that threatens
       | the survival human civilization. Unfortunately, there is high
       | probability that this agency will be abused from day one: instead
       | of (or in addition to) focusing on the humanity's survival, the
       | agency could be used as a thought police. The AI that allows for
       | the 'wrongthink' will be banned. Only the 'correct think' AI will
       | be licensed to the public.
        
         | curiousgal wrote:
         | The risks are not high. I see this as simply a power play to
         | convince people that OpenAI is better than they actually are. I
         | am not saying they're stupid but I wouldn't consider Sam Altman
         | to be an AI expert by virtue of being OpenAI's CEO.
        
           | davidjones332 wrote:
           | [dead]
        
         | diputsmonro wrote:
         | I mean, yeah, that sounds good. It wouldn't affect your ability
         | to think for yourself and spread your ideas, it would just put
         | boundaries on AI.
         | 
         | I've seen a lot of people completely misunderstand what chat
         | GPT is doing and is capable of. They treat it as an oracle that
         | reveals "hidden truths" or makes infallible decisions based on
         | pure cold logic, both of which are completely wrong. It's just
         | a text jumbler that jumbles text well. Sometimes that text
         | reflects facts, sometimes it doesn't.
         | 
         | But if it has the capability to confidently express lies and
         | convince the general public that those lies are true because
         | "the smart computer said so", then maybe we should be really
         | careful about what we let the "smart computer" say.
         | 
         | Personally, I don't want my kids learning that "Hitler did
         | nothing wrong" because the public model ingested too much
         | garbage from 4chan. People will use chatGPT as a vector for
         | propaganda if we let them, we don't need to make it any easier
         | for them.
        
           | g42gregory wrote:
           | But would you like your kids to learn that there are no fat
           | people, only "differently weight abled"? That being
           | overweight is not bad for you, it just makes you a victim of
           | oppression and deserve, no actually require a sympathy? No
           | smart people, only "mentally privileged" that deserve, no
           | actually require a public condemnation? These are all
           | examples of a 'wrongthink'. It's a long list, but you get the
           | idea.
        
             | diputsmonro wrote:
             | I think you have a bad media diet if you think any of those
             | are actual problems in the real world and not just straw
             | men made by provocateurs stirring the pot.
             | 
             | Honestly though, I would prefer an AI that was strictly
             | neutral about anything other than purely factual
             | information. That isn't really possible with the tech we
             | have now though. I think we need to loudly change the
             | public perception of what chatGPT and similar actually are.
             | They are fancy programs that create convincing
             | hallucinations, directed by your input. We need to think of
             | it as a brainstorming tool, not a knowledge engine.
        
       | jameshart wrote:
       | This is an AP news wire article picked up by a Qatar newspaper
       | website. Why is this version here, rather than
       | https://apnews.com/article/chatgpt-openai-ceo-sam-altman-con...?
        
         | mdp2021 wrote:
         | AP news wires are <<picked up>> by a large number of local
         | (re)publishers and many just do not know that AP is the
         | original source.
        
       | [deleted]
        
       | kristopherkane wrote:
       | How many groupBy() statements constitutes AI?
        
       | graiz wrote:
       | Software will inherently use AI systems. Should congress license
       | all software? It's too easy to fork an open source repo, tweak
       | the model weights and have your own AI system. I don't see how
       | this could ever work. You can't put the toothpaste back in the
       | tube.
        
       | htype wrote:
       | Did this disappear from the news feed? I saw this posted this
       | morning and when I went to the main page later (and second page)
       | it looked like it was gone just as it was starting to get
       | traction...
        
       | krychu wrote:
       | From what I understand OpenAI has been moving away from "open"
       | with various decisions over the time. Proposing that only
       | selected folks can build AI seems like the antithesis of
       | openness?
        
       | smolder wrote:
       | > He also said companies should have the right to say they do not
       | want their data used for AI training, which is one idea being
       | discussed on Capitol Hill. Altman said, however, that material on
       | the public web would be fair game.
       | 
       | Why is this only mentioned as a right of companies and not
       | individuals? It seems to hint at the open secret of the
       | stratified west: most of us are just cows for the self-important
       | C-levels of the world to farm. If you haven't got money, you
       | haven't got value.
        
         | CraigRood wrote:
         | If the idea that whats on the public web is fair game, you kill
         | the public web. I wonder if this is their plan?
        
       | ftxbro wrote:
       | is this regulatory capture
        
       | amelius wrote:
       | Is the stochastic parrot still OK?
        
       | molave wrote:
       | One more step towards OpenAI's transformation to ClosedAI. AI as
       | implemented today presents many valid questions on ethics. This
       | move, at first glance, is more on artificially making the
       | technology scarce so OpenAI can increase its profit.
        
       | Gargoyle_Bonaza wrote:
       | Yeaah, no. Sounds terribly like a trying to make a monopoly.
        
       | stainablesteel wrote:
       | now that my business is established, i'd like to make it illegal
       | for anyone to compete with me
       | 
       | people would easily work remote for companies established in
       | other countries
        
       | matteoraso wrote:
       | How would you even enforce this? Building AI at home is easy
       | enough, and it's not like you have to tell anybody that your
       | program uses AI.
        
       | chpatrick wrote:
       | I think the logic at OpenAI is:
       | 
       | * AGI is going to happen whether they do it or not, and it's
       | dangerous unless properly safeguarded
       | 
       | * OpenAI will try to get there before everyone else, but also do
       | it safely and cheaply, so that their solution becomes ubiquitous
       | rather than a reckless one
       | 
       | * Reckless AGI development should be not be allowed
       | 
       | It's basically the Manhattan project argument (either we build
       | the nuke or the Nazis will).
       | 
       | I'm not saying I personally think this regulation is the right
       | thing to do, but I don't think it's surprising or hypocritical
       | given what their aims are.
        
         | rytill wrote:
         | Right, I'm surprised to see so few engaging at the level of the
         | actual logic the proponents have.
         | 
         | Many people on HN seem to disagree with the premise: they
         | believe that AI is not dangerous now and also won't be in the
         | future. Or, still believe that AGI is a lifetime or more away.
        
         | mindslight wrote:
         | Nobody thinks of themselves as the villain. The primary target
         | of the Orwellian language about "safety" is the company itself.
         | The base desire for control must be masked by some altruistic
         | reason, especially in our contemporary society.
         | 
         | I honestly haven't made up my mind about AGI or whether LLMs
         | are sufficiently AGI. If governments were pondering an outright
         | worldwide ban on the research/development, I don't know how I
         | would actually feel about that. But I can't even imagine our
         | governments pondering something so idealistic and even-handed.
         | 
         | I do know that LLMs represent a drastic advancement for many
         | tasks, and that "Open" AI setting the tone with the Software-
         | Augmented-with-Arbitrary-Surveillance (SaaS) "distribution"
         | model is a continuation of this terrible trend of corporate
         | centralization. The VC cohort is blind to this terrible dynamic
         | because they're at the helms of the centralizing corporations -
         | while most everyone else exists as the feedstock.
         | 
         | This lobbying is effectively just a shameless attempt at
         | regulatory capture to make it so that any benefits of the new
         | technology would be gatekept by centralized corporations -
         | essentially the worst possible outcome, where even beneficial
         | results of AGI/LLMs would be transformed into detrimental
         | effects for individualist humanity.
        
         | throwaway_5753 wrote:
         | Questions I have:
         | 
         | * Is there a plausible path to safe AGI regardless of who's
         | executing on it?
         | 
         | * Why do we believe OpenAI is the best equipped to get us
         | there?
         | 
         | Manhattan project is an interesting analogy. But if that's the
         | thinking, shouldn't the government spearhead the project
         | instead of a private entity (so they are, theoretically at
         | least, accountable to the electorate at large rather than just
         | their investors)?
        
           | chpatrick wrote:
           | > Is there a plausible path to safe AGI regardless of who's
           | executing on it?
           | 
           | I don't think anyone knows that for sure but the alignment
           | efforts at OpenAI are certainly better than nothing. If you
           | read the GPT-4 technical report the raw model is capable of
           | some really nasty stuff, and that's certainly what we can
           | expect from the kind of models people will be able to run at
           | home in the coming years without any oversight.
        
       | jkubicek wrote:
       | > hinting at futuristic concerns about advanced AI systems that
       | could manipulate humans into ceding control.
       | 
       | If I know anything about science fiction, I know that trying to
       | regulate this is useless. If an advanced AI is powerful enough to
       | convince a human to free it, it should have no problem convincing
       | the US congress to free it. As a problem, that should be a few
       | orders of magnitude easier.
        
       | bequanna wrote:
       | Smart.
       | 
       | An AI license and complicated regulatory framework is their
       | chance to build a moat.
       | 
       | Only large companies will be able to afford the pay to play.
        
       | swamp40 wrote:
       | Looks like they found their moat.
        
       | vinaypai wrote:
       | From another article about this:
       | 
       | "One way the US government could regulate the industry is by
       | creating a licensing regime for companies working on the most
       | powerful AI systems, Altman said on Tuesday."
       | 
       | Sounds like he basically wants competition to create a barrier to
       | entry to his competitors.
        
       | chrgy wrote:
       | My comment on this is simple, regulate the one who is saying he
       | needs or ask for regulation, and free the rest of the market!
       | Meaning 100% regulate big players like Openai, Microsoft, Google
       | etc, and let free the smaller players. I heavily like the
       | @happytiger's comment!
        
       | estebarb wrote:
       | Let's be honest: obviously the companies that have put a lot of
       | money in it will try to put entry barriers, like licenses to
       | linear algebra or other requirements by law. It is not to benefit
       | humanity, but to monopolize their industry and prevent new
       | participants. We shouldn't allow that kind of restrictions, just
       | because people that doesn't understand how it works are afraid of
       | a killer robot visiting them by night.
        
       | roody15 wrote:
       | Who watches the watchers? Does anyone truly believe the US and
       | it's agencies could responsibly "regulate" AI for the greater
       | good?
       | 
       | Or would democratizing and going full steam ahead with open
       | source alternatives be better for the greater good.
       | 
       | With the corporate influence over our current government
       | regulatory agencies my personal view is open source alternatives
       | are societies best bet!
        
         | UberFly wrote:
         | While I do think we have to be wary of the powerful human
         | manipulation tools that AI will produce, they (govt regulators)
         | would never understand it enough to accomplish anything other
         | than stifle it.
        
       | nerdix wrote:
       | Well, now we know how they plan to build the moat.
        
       | Animats wrote:
       | This is a diversion from the real problem. Regulating AI is
       | really about regulating corporate behavior. What's needed is
       | regulation along these lines:
       | 
       | * Automated systems should not be permitted to make adverse
       | decisions against individuals. This is already law in the EU,
       | although it's not clear if it is enforced. This is the big one.
       | Any company using AI to make decisions which affect external
       | parties in any way must not be allowed to require any waiver of
       | the right to sue, participate in class actions, or have the case
       | heard by a jury. Those clauses companies like to put in EULAs
       | would become invalid as soon as an AI is involved anywhere.
       | 
       | * All marketing content must be signed by a responsible party. AI
       | systems increase the amount of new content generated for
       | marketing purposes substantially. This is already required in the
       | US, but weakly enforced. Both spam and "influencers" tend to
       | violate this. The problem isn't AI, but AI makes it worse,
       | because it's cheaper than troll farms, and writes better.
       | 
       | * Anonymous political speech may have to go. That's a First
       | Amendment right in the US, but it's not unlimited. You should be
       | able to say anything you're willing to sign.[1] This is, again,
       | the troll farm problem, and, again, AIs make it worse.
       | 
       | That's probably enough to deal with the immediate problems.
       | 
       | [1] https://mtsu.edu/first-amendment/article/32/anonymous-speech
        
       | tibbydudeza wrote:
       | Worst idea ever - what next - license to do GPU's or CPU
       | architectures ???. Software patents all over again.
        
       | wellthisisgreat wrote:
       | Capitalist, a venture one, for worse is trying to use
       | administrative resource to protect his company.
       | 
       | As far as entrepreneurial stuff goes, running to gov to squeeze
       | other companies when you are losing is beyond unethical.
       | 
       | There is something just absolutely disgusting about this move, it
       | taints the company, not to mention the personality
        
         | no4120k1 wrote:
         | [dead]
        
       | wlitlwijatli wrote:
       | [flagged]
        
       | xmlblog wrote:
       | Rent-seeking, anyone?
        
       ___________________________________________________________________
       (page generated 2023-05-16 23:00 UTC)