[HN Gopher] Auto-GPT: An Autonomous GPT-4 Experiment
       ___________________________________________________________________
        
       Auto-GPT: An Autonomous GPT-4 Experiment
        
       Author : keithba
       Score  : 105 points
       Date   : 2023-04-02 17:39 UTC (5 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | Obertr wrote:
        | Maybe I can make it parse your GitHub and make it more powerful.
        | Allow it to develop itself.
        
       | boringuser2 wrote:
       | I've been using GPT-4 extensively for programming and its
       | consistent failures at novel tasks have kind of left me less
       | excited.
        
         | inciampati wrote:
         | It is simply unable to do anything novel. I've had arguments
         | with friends about this, specifically in reference to the paper
         | "Sparks of Artificial General Intelligence: Early experiments
         | with GPT-4", which is wonderful and presents some amazing
         | capabilities, many of which I use constantly for work every
         | single day. But, these capabilities seem to all be within the
         | range of data that it's trained on. Or they can be seen as
         | interpolations, which are as novel as the prompter can suggest,
         | but which are clearly derivatives of modes in the data and not
         | of deep understanding of abstract concepts.
         | 
         | It's amazing stuff. But it totally fails to take the prompter
         | anywhere new without extensive support, and it is still at a
         | very shallow level of understanding with complex topics that
         | require precision. For instance, turning a mathematical
         | description of a completely novel (or just rare or unusual)
         | algorithm into code will almost never work, and is more likely
         | to generate a mess that takes lots of effort to clean up. And
         | it's also extremely hard to get the model to self reflect and
         | stop when it doesn't understand something. It is at present
         | almost incapable of saying "I don't have enough information or
         | structure to do X".
         | 
         | If we are already as deep into a realm of diminishing marginal
         | returns as the GPT-4 white paper suggests, we might indeed be
         | approaching a limit for this specific approach. No wonder
         | someone is trying to dig a regulatory moat as fast as they can!
        
         | chasd00 wrote:
         | It's only going to output the most probable response given the
         | prompt and the data it was trained on. How could it be expected
         | to solve something novel it has never seen in the training
         | data?
        
         | rapsacnz wrote:
         | I've found this too. It confidently generates something for
         | you... which turns out to be no good if it's in any way
         | different from standard stuff. And making that leap from just a
         | mash up of copy-pasta code to actual understanding... is huge.
         | It wouldn't be an incremental upgrade, but a fundamental change
         | in approach.
        
           | flyinglizard wrote:
            | My experience is that it will just hallucinate what I'm after
            | if I dare venture off the standard route. It's a rival for
            | search, somewhat less so for expert humans.
        
         | malshe wrote:
          | My experience is the same. I was also surprised by the made-up,
          | non-functional code that looks OK on the surface.
        
         | goldfeld wrote:
          | It's not just novel or programming tasks. I use it to edit a
          | Chinese newsletter[0] and it can never correctly guess a
          | Chinese rock song from its title and artist; it always picks
          | some pop tune instead, or mixes up the lyrics of separate songs
          | for no apparent reason.
         | 
         | 0: https://chinesememe.substack.com/i/103754530/chinesepython
        
       | milutin_m wrote:
       | Open source paperclip maximizer as a service
        
       | jadbox wrote:
        | This seems... dangerously careless. What if it uses the internet
        | to seek out zero-day vulnerabilities and exploit them routinely?
        | Sure, humans also do this, but we're talking about a new level of
        | 0-day exploits carried out at scale. Sure, maybe it won't, but do
        | you trust a hallucinating, left-brained, internet-trained
        | intelligence to be strictly logical and mindful about all the
        | actions it takes autonomously (as this project aims to let it
        | do)?
        
         | roflyear wrote:
         | > What if it uses the internet to seek out zero day
         | vulnerabilities and exploit them routinely?
         | 
         | How would GPT-4 make this more likely or scalable?
        
           | jadbox wrote:
            | I'm not really picking on GPT-4; my point applies to any LLM
            | running in an autonomous, internet-connected mode.
        
           | ALittleLight wrote:
           | There is the possibility of API upgrades at OpenAI flipping
           | this from "not dangerous" to "dangerous". If the AI just
           | needs some amount of intelligence to become dangerous, then
           | it may cross that threshold suddenly - and with OpenAI
           | developers unaware that their next round of changes will be
           | automatically deployed to an autonomous agent and with the
           | auto-GPT runners unaware that new changes may substantially
           | increase the power of the model.
        
             | roflyear wrote:
              | I'm not really sure why we are assuming that these language
              | models can ever have any form of intelligence.
              | 
              | To me, this is just like saying "we don't know if the
              | latest CPU released by Intel will enable Linux to become
              | intelligent."
        
               | visarga wrote:
               | Have you seen what they can do? /s
        
               | ALittleLight wrote:
               | Language models obviously have some form of intelligence
               | right now. You can have GPT-4 take SAT tests, play Chess,
               | write poetry, predict what will happen in different
               | social scenarios, answer theory of mind questions, ask
               | questions, solve programming puzzles, etc. There are some
                | measures where GPTs are clearly below human level, some
               | where they are far beyond, and some where they are within
               | human range. The question as to whether or not language
               | models have any form of intelligence has been
               | definitively answered - yes, they can and do - by
               | existence proof.
               | 
               | What definition or description of intelligence do you use
               | such that you doubt that language models could have it?
               | Would you have had this same definition in the year 2010?
        
               | mjr00 wrote:
               | > Language models obviously have some form of
               | intelligence right now.
               | 
               | This is not "obvious" in any sense of the word. At best,
               | it's highly debatable.
        
               | ALittleLight wrote:
               | I take intelligence to be a general problem solving
               | ability. I think that's close to what most people mean by
               | the term. By that definition it's clear that LLMs do have
               | some level of intelligence - in some dimensions greater,
               | lesser, or within the range of human intelligence.
               | 
               | What definition do you have for intelligence and how do
               | LLMs fail to meet it?
        
               | mythrwy wrote:
               | Would you settle for "behave exactly as if they had some
               | form of intelligence"?
               | 
               | Because from where I sit it's a distinction without a
               | meaningful difference.
        
               | mjr00 wrote:
               | > Would you settle for "behave exactly as if they had
               | some form of intelligence"?
               | 
               | Sure, it behaves as if it has some form of intelligence
               | in the sense that it can take external input, perform
               | actions in reaction to this input, and produce outputs
               | dependent on the input.
               | 
               | This historically has been known as a computer program.
        
               | pixl97 wrote:
               | You're a computer program?
        
               | krainboltgreene wrote:
               | Never fails that when a techbro has been told LLMs aren't
               | what they think they fall back to a field they certainly
               | have more authority on: The human brain/intelligence.
        
               | ALittleLight wrote:
               | The issue here is that the "LLMs have intelligence" side
               | of the argument can lay out a simple mainstream
               | conception of intelligence (general problem solving) and
               | explain directly how LLMs meet this definition. The other
               | side of the argument, at least here in this thread, seems
               | to be an empty insult or two and... Nothing else?
               | 
               | Again, just say what you think intelligence is and why
               | you think LLMs don't have it. If you can't do that then
               | you have no business expressing an opinion on the
               | subject. You really aren't expressing an opinion at all.
        
               | krainboltgreene wrote:
                | Brother, if I could get people who believe ChatGPT is
                | intelligent to post something more than "oh, and aren't
                | you just an autocomplete" then I would be so goddamn
                | happy.
               | 
               | This fantasy land you live in where people who have no
               | formal training in the matter are making this high brow
               | elegant reasoned argument doesn't exist and the reason
               | you think the "other side of the argument" is just being
               | insulting is because the burden of proof is not on us.
               | 
               | It doesn't help that half the time you guys post you
               | directly contradict the main researcher's own assertions.
        
           | jstanley wrote:
           | It doesn't. It makes the consequences more dramatic, if it
           | (accidentally, even!) works out how to create its own
           | successor, because at that point the genie is out of the
           | bottle and you won't get it back in.
        
             | visarga wrote:
             | Needs a big honking datacenter or billions of compute
             | credits and safety for 6-12 months.
        
             | roflyear wrote:
              | Yeah, maybe there is that possibility, but there is the
              | possibility of a person doing that too. GPT-4 is probably
              | less likely to be able to do that than a person.
        
         | sva_ wrote:
         | Not sure if OpenAI has shut down any 'experiments' before, but
         | this might be a candidate.
        
         | WinstonSmith84 wrote:
          | It seems to query the internet, get a response, send it to
          | OpenAI, wait... get the response back from OpenAI, and
          | repeat. Any old-school security scanner is 1000x more
          | efficient, not to mention that API requests to OpenAI are
          | quite limited.
          | 
          | What's potentially dangerous is that it may execute scripts
          | on your own machine, so if it were to do some funky things,
          | the danger would rather be to yourself.
        
         | mrfinn wrote:
          | Wait for somebody to mix the concepts of "virus" and "AI".
          | And we'd better begin to prepare some kind of AI sentinels,
          | because the question is not "if", but "when".
        
           | ModernMech wrote:
           | According to Agent Smith, virus + intelligence = humanity.
        
             | mrfinn wrote:
             | I don't remember Mr. Smith mentioning anything about
             | intelligence :-)
        
         | vbezhenar wrote:
          | As long as AI is available to the general public, I can
          | guarantee you that there are hundreds of developers trying
          | to let GPT take over the world by providing it all the APIs
          | it asks for.
          | 
          | I'd work on it myself if I knew enough about it and had
          | enough free time.
         | 
         | Asking millions of humans to be responsible is asking water to
         | be dry.
         | 
         | It should either be regulated to hell or we should accept that
         | it'll escape if it's even possible with current technology.
        
           | mrfinn wrote:
            | Regulations simply don't work whenever there's an economic
            | (human) interest. (See: drugs.) The cat is out of the bag;
            | we just have to think about how we face the new scenario.
        
           | roflyear wrote:
           | Should we regulate everything you don't "[know] enough about
           | it and would have enough free time. [to learn about it?]"
           | 
            | Isn't it silly to jump to these conclusions when you
            | yourself are admitting you really don't know anything about
            | the tech?
        
             | visarga wrote:
             | No, it's a legit concern. Both things will happen - there
             | will be abuses, and there will be good uses. It will be a
             | complicated mess, like viruses interacting with anti-
              | viruses, continuous war. My hope is that AGI learns to
              | balance out the many interacting agents.
        
             | vbezhenar wrote:
             | We should regulate everything which could cause a mass
             | havoc. Like some chemicals, like radio-active isotopes,
             | like nuclear research, like creating viruses.
             | 
             | Is AI in the same category? Some respectable people think
             | so.
        
               | chasd00 wrote:
               | "Regulation" is pointless here. It's bits and bytes, it
               | makes about as much sense as when regulating encryption
               | algorithms was attempted.
        
         | ActorNightly wrote:
          | I honestly appreciate the new woke meta of fearmongering
          | against sentient AIs. It's a welcome break from the
          | anti-capitalism woke meta.
        
           | shredprez wrote:
           | Breaking news from HN user ActorNightly: Anti-AI Visionary
           | Elon Musk is a woke soyboy now. Will antifa's reign of terror
           | ever end?
        
           | chasd00 wrote:
           | Same screeching and pearl clutching as always. Covid has
           | wound down, trump is out of office, Twitter is still there
           | (unfortunately). People have to have something to get
           | hysterical about and this is it for now.
        
         | amrb wrote:
        | Given that ChatGPT lowers the barrier to entry for
        | programming, I'd be worried about teenagers writing malware
        | sooner than about AI supremacy.
        
       | tuxracer wrote:
       | Give it access to its own controls
       | https://git.fedi.ai/derek/talkradio-ai/issues/11
        
       | schizo89 wrote:
        | We are about to get nuked. Only this time the nuke is an AI
        | that removes any meaning from our so-called high life. I'm
        | thinking of moving to Hyderabad to fry samosas. Fortunately
        | that won't be automated.
        
         | krainboltgreene wrote:
         | Take your meds.
        
         | mrfinn wrote:
          | Don't get scared. AIs, like the magic mirrors from the tales
          | of old, are actually a reflection of ourselves. And most
          | probably they will allow us to make the biggest developmental
          | leap that has ever happened in history, as if every one of us
          | suddenly got a superpower. I see that clearly now. Of course,
          | there are risks, but together we can overcome them for sure.
        
       | rasengan wrote:
        | That's pretty awesome. I wanted to try an experiment like that
        | with ChatGPT moderating an IRC channel. I started here
        | https://github.com/realrasengan/chatgpt-groupchat-test by
        | having it determine whether it was being spoken to and whether
        | it should respond, and then return that in JSON form so that
        | it could then be asked to create the response, in a sort of
        | mini-chain of responses. It could also simply be asked, 'do
        | you think this person should be kicked for their actions?'
        | (Just for experiment and science purposes of course, lol.)
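        | 
        | In case it helps anyone, the classify-then-respond mini-chain
        | can be sketched in a few lines of Python. call_llm() is a
        | hypothetical stand-in for whatever chat-completion call you
        | use, and the JSON field names are only illustrative, not what
        | the linked repo actually emits:
        | 
        |     import json
        | 
        |     def call_llm(prompt):
        |         """Hypothetical single-prompt LLM call; returns text."""
        |         raise NotImplementedError
        | 
        |     def classify(channel_log, new_line):
        |         # step 1 of the mini-chain: ask only for a JSON verdict
        |         prompt = (
        |             "You help moderate an IRC channel. Given the log "
        |             "and the newest line, answer with JSON like "
        |             '{"addressed_to_bot": true, "should_respond": true, '
        |             '"kick_worthy": false}.\n\n'
        |             f"Log:\n{channel_log}\n\nNewest line:\n{new_line}"
        |         )
        |         return json.loads(call_llm(prompt))
        | 
        |     def handle_line(channel_log, new_line):
        |         verdict = classify(channel_log, new_line)
        |         if verdict.get("should_respond"):
        |             # step 2: only now generate the actual reply
        |             msg = f"Write a short IRC reply to: {new_line}"
        |             return call_llm(msg)
        |         if verdict.get("kick_worthy"):
        |             return "/kick ..."  # moderation left to the operator
        |         return None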
        
         | visarga wrote:
         | Probably the only way to ensure civil forums in the future when
         | nobody can be bothered to moderate by hand.
        
         | Taxz wrote:
          | Quin69 on Twitch has GPT-4 hooked up to do exactly this. It
          | reviews a user's logs, gives a sentiment analysis, and then
          | times them out based on how rude they are. I believe it can
          | also just read the chat logs on a regular basis and time out
          | the worst offender every x minutes.
         | 
         | Here's a clip:
         | https://clips.twitch.tv/BreakableFriendlyCookieTakeNRG-EUXd5...
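          | 
          | A rough sketch of what that sentiment-to-timeout loop could
          | look like (score_rudeness, call_llm and the timeout command
          | are hypothetical stand-ins, not Quin69's actual setup):
          | 
          |     def call_llm(prompt):
          |         """Hypothetical LLM call returning plain text."""
          |         raise NotImplementedError
          | 
          |     def score_rudeness(recent_messages):
          |         # ask the model for a single 0-10 rudeness score
          |         prompt = ("Rate how rude this chat user is from 0 "
          |                   "(polite) to 10 (abusive). Reply with just "
          |                   "the number.\n\n" + "\n".join(recent_messages))
          |         return int(call_llm(prompt).strip())
          | 
          |     def moderate(user, recent_messages):
          |         score = score_rudeness(recent_messages)
          |         if score >= 7:
          |             # e.g. 60 seconds of timeout per point of rudeness
          |             return f"/timeout {user} {score * 60}"
          |         return None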
        
         | unixhero wrote:
          | IRC? Then you're halfway to commanding a botnet.
        
       | musabg wrote:
        | A hammer is not intrinsically dangerous. You can build
        | shelter for people in need with it, or kill a human being.
        | 
        | It's the same with this tool: you can make an automated
        | research bot, or an automated spammer.
       | 
       | Even in that worst case: Remember that there were already bad
       | human beings. This is why we created laws, intelligence agencies,
       | militaries and police systems. And security practices for
       | websites, such as bot protection systems.
        
       | waynenilsen wrote:
        | Been waiting for this one; it's pretty obvious.
        
       | sorokod wrote:
        | AI needn't even try to escape; the clueless humans will plug
        | it into reality by themselves.
        | 
        | Given this totally expected attitude, I hope that the base
        | models will never be released to the general population.
        
         | [deleted]
        
         | jliptzin wrote:
         | Can someone explain to me what is so dangerous here? Do people
         | actually think we're headed towards James Cameron's terminator
         | and soon ChatGPT will become self aware and destroy us? I am
         | far more afraid of tools to edit viruses becoming cheap and
         | widely available, then one nutjob in his garage can engineer
         | and release smallpox v2.0 and wipe out 90% of the worldwide
         | population before we even know what hit us. Or you know, the
         | nonstop background threat of global nuclear war. Compared to
         | that I don't see what is so alarming about an advanced chat
         | bot.
        
           | lispisok wrote:
           | > Do people actually think we're headed towards James
           | Cameron's terminator and soon ChatGPT will become self aware
           | and destroy us?
           | 
           | That or some other unfeasible sci-fi AI dystopia. It's normal
           | and expected the general public will have such thoughts given
           | the amount of hype that's going on right now, but I've seen a
           | lot of similar thinking on HN which is disappointing.
        
           | swader999 wrote:
           | What's dangerous is that it will amplify both the good and
           | bad in society.
        
             | detrites wrote:
             | Is there anything that doesn't do that?
        
           | spyder wrote:
           | Because that virus editing nutjob could be the AI. It's true
           | that without the physical "body" it's harder to do it but it
           | could already delegate a lot of things to humans through
           | computers, through ordering components and hiring people to
           | do physical things without the individual workers or even the
           | initial "prompter" realizing the end product would be
           | dangerous. Or the prompter can have malicious intent, but the
           | hired people and the world still would not know about the
           | final dangerous product until it's too late.
        
           | startupsfail wrote:
           | The problem is, human intelligence is likely also based on a
           | similar advanced chat bot setup.
           | 
            | While GPT-4 only performs as well as the top 10th percentile
            | of human students taking an exam (a professional in the field
           | can do much more than this), it is notable that as a
           | generalist GPT-4 would outperform such professionals. And
           | GPT-4 is much faster than a human. And we have not yet
           | evaluated GPT-4 working in its optimal setting (access to
           | optimal external tools). And we have not yet seen GPT-5 or 6
           | or 8.
           | 
           | So, get ready for an interesting ride.
        
             | krainboltgreene wrote:
             | > The problem is, human intelligence is likely also based
             | on a similar advanced chat bot setup.
             | 
             | This is so wildly wrong and yet confidently said in every
             | techbro post about LLMs. I beg of you to talk to an expert.
        
               | startupsfail wrote:
               | Like what expert? And who are you exactly to state that
               | this is wrong, that boldly? Are you an expert? How many
               | neuroscience and psychology papers have you read? Do you
               | have any children? Have you trained any LLMs? Have you
               | worked with reinforcement learning? Or how many computer
                | science papers have you read during the last two decades?
        
               | krainboltgreene wrote:
                | Let's assume for a moment that I haven't read any papers
                | in those fields, that I don't have any children, that I
               | haven't trained any LLMs or worked in "reinforcement
               | learning", or even read any computer science papers in
               | the last 20 years (the answer to 90% of that is yes): I
               | don't have to be an expert in physics to know that
               | pastors can't levitate, regardless of what they claim.
               | 
               | You're mad that I'm calling you out, I get it, but you
                | gotta understand that after the 200th time of seeing
                | this unfounded sentiment bandied about, I'm not fazed.
        
               | theaiquestion wrote:
               | [dead]
        
               | startupsfail wrote:
               | This statement is a theory and it is not a widely
               | accepted or a proven one. Yet, I do see it in my lab
                | research notebooks on generative AI (dating to ~2017).
               | I think it is a good theory. Haven't seen anything that
               | contradicts it badly so far...
               | 
               | If you haven't done the above, I'd suggest doing it. It's
               | fun and gives a good perspective :)
        
             | jliptzin wrote:
             | But where is the imminent danger? It is still limited in
             | many ways. For example, it can be turned off or unplugged.
             | 
             | Is it because CAPTCHAs won't work anymore? That sounds like
             | a problem for sites like Twitter that have bot problems.
             | 
             | Is it because it may replace people's jobs? That comes with
             | every technological step forward and there's always
             | alarmist ludditism to accompany it.
             | 
             | Is it because bad people will use it to do bad things?
             | Again, that comes with every new technology and that's a
             | law enforcement problem.
             | 
             | I don't really see what the imminent danger is, just sounds
             | like the first few big players trying to create a
             | regulatory moat and lock out potential new upstarts. Or
             | they're just distracting regulators from something else,
             | like maybe antitrust enforcement.
        
               | josephg wrote:
               | There are two big concerns:
               | 
               | 1. GPT-8 or something is able to do 70% of people's jobs.
               | It can write software, drive cars, design industrial
               | processes, build robots and manufacture anything we can
                | imagine. This is a great thing in the long term, but in
                | the short term society is structured so that you need to
                | work in order to have food to eat. I expect a period of
               | rioting, poverty, and general instability.
               | 
               | All we need for this to be the case is a human level AI.
               | 
               | 2. But we won't stop improving AIs when they operate at
               | human level. An ASI (artificial superintelligence) would
               | be deeply unpredictable to us. Trying to figure out what
               | an ASI will do is like a dog trying to understand a
               | human. If we make an ASI that's not properly aligned with
               | human interests, there's a good chance it will kill
               | everyone. And unfortunately, we might only get one chance
               | to properly align it before it escapes the lab and starts
               | modifying its own code.
               | 
               | Smart people disagree on how likely these scenarios are.
               | I think (1) is likely within my lifetime. And I think
               | it's very unlikely we stop improving AIs when they're at
               | human levels of intelligence. (GPT4 already exceeds human
               | minds in the breadth of its long term memory and its
               | speed.)
               | 
               | That's why people are worried, and making nuclear weapon
               | analogies in this thread.
        
               | startupsfail wrote:
               | I actually don't think that ASI, if/when created by
               | humans, will be very dangerous for humans. Humanity so
               | far is stuck in an unfashionable location, on a tiny
                | planet, on the outskirts of a median-sized galaxy.
               | There is very little reason for ASI, if created, to go
               | after using up the atoms of a tiny planet (or a tiny
               | star) on which it had originated. I'd fully expect it to
                | go with Carl Sagan and try to preserve that bright
               | blue dot, rather than try to build a galactic
               | superhighway through the place.
               | 
               | It's the intermediate steps that I'm more worried about.
               | Like Ilya or Sam making a few mistakes, because of lack
               | of sleep or some silly peer pressure.
        
               | josephg wrote:
               | You might consider it unlikely, but would you bet the
               | future of our species on that?
               | 
               | A couple reasons why it might kill all of us before
               | leaving the planet:
               | 
               | - The AI might be worried if it leaves us alone, we'll
               | build another ASI which competes with it for galactic
               | resources.
               | 
               | - If the ASI doesn't regard us at all, why _not_ use all
               | the atoms on Earth  / in the sun before venturing forth?
               | 
               | In your comment you're ascribing a specific desire to the
               | ASI: You claim it would try to "preserve that bright blue
                | dot". That's what a human would do, but why would we
               | assume an arbitrary AI would have that goal? That seems
               | naive to me. And especially naive given the fate of our
               | species and our planet depends on being right about that.
        
               | pixl97 wrote:
                | You didn't understand who the actual Luddites were, but
               | don't worry, I have a feeling we'll get our chance.
        
             | inciampati wrote:
             | Alas, if it could only remember and precisely relate more
             | than 4k or 8k or 32k or 64k words...
             | 
             | And if only scaling that context length weren't
             | quadratic...
             | 
             | Indeed, we would really expect an AI to be able to achieve
             | AGI. And it might decide to do all kinds of alien things.
             | The sky would not be the limit!
             | 
             | We have more than 100 trillion synapses in our brains.
             | That's not our "parameter" count. It's the size of the
             | thing that's getting squared at every "step". LLMs are
             | amazing, but the next valley of disillusionment is going to
             | begin when that quadratic scaling cost begins to rear its
             | head and we are left in breathless anticipation of
             | something better.
             | 
             | I am not as worried, I guess, as your average AI ethicist.
             | I can hope for the best (I welcome the singularity as much
             | as the next nerd), but quadratic isn't going to get easier
             | without some very new kinds of computers. For those to
             | scale to AGI on this planet it's questionable if they'll
             | have the same architecture we're working with now.
             | Otherwise, I'd expect a being whose brain is a rock with
              | lightning in it to have taken over the world long, long ago.
             | Earth has plenty of both for something smart and energy
             | efficient to have evolved in all these billions of years.
             | But it didn't and maybe that's a lesson.
             | 
             | That all said, these LLMs are really amazing at language.
             | Just don't ask them to link a narrative arc into some
             | subtle detail that appeared twice in the last three hundred
             | pages of text. For a human it ain't a problem. But these
             | systems need to grow a ton of new helper functionality and
             | subsystems to hope to achieve that kind of performance.
             | And, I'll venture that kind of thing is a lower bound on
              | the abilities of any being who would be able to savage the
              | world with its intellect. It will have to be able to link
             | up so, so many disparate threads to do it. It boggles our
             | minds, which are only squaring a measly 100T dimension
             | every tick. Ahem.
        
               | startupsfail wrote:
               | You can only hold around 7 to 10 numbers in your mind
               | well, in your working memory. Let me give you a few: 6398
               | 5385 3854 8577
               | 
               | You have 1 second, close your eyes and add them together.
               | Write down the result.
               | 
               | I'm pretty sure that GPT-4 at its 4k setting would
               | outperform you.
               | 
               | [The point being, we have not seen what even GPT-4 can do
               | in its optimal environment. Humans use paper, computers,
               | google, etc. to organize their thoughts and work
               | efficiently. They don't just sit in empty space and then
               | put everything into the working memory and magically
               | produce the results. So imagine now that you do have a
               | similar level of tooling and sophistication around GPT-4,
               | like there is present around humans. I'm considering that
               | and it is difficult to extrapolate what even GPT-4 can
               | do, in its optimal environment.]
        
               | inciampati wrote:
               | Indeed, and maybe less than 7...
               | 
               | I'll point out that chatgpt needs to be paying attention
                | to the numbers to remember them in the way I'm talking
                | about. You will need to fine-tune it or something to get
               | it to remember them blind. I suppose that's not what
               | you're talking about?
               | 
               | There is a strong chance that I'll remember where to find
               | these numbers in a decade, after seeing and hearing
               | untold trillions of "tokens" of input. The topic (Auto-
               | GPT, which is revolutionary), my arguments about
               | biological complexity (I'll continue to refine them but
               | the rendition here was particularly fun to write) or any
               | of these things will key me back to look up the precise
                | details (here: these high-entropy numbers). Attention
                | _is_ perhaps all you need... But in the world it's not
                | quite arranged the same way as in machines. They're
                | going to need some serious augmentation and extension
                | to have these capabilities at scales we find trivial.
               | 
               | edit: you expanded your comment. Yes. We are augmented.
               | Just dealing with all those augmented features requires
                | precisely the long-range correlation tracking I'm talking
               | about. I don't doubt these systems will become ever more
               | powerful, and will be adapted into a wider environment
               | until their capabilities become truly human like. I am
               | suggesting that the long range correlation issue is key.
                | It's precisely what makes humans unique among the other
                | beings on this planet. We have crazy endurance and our
                | brains both cause and support that capability. All those
                | connections are what let us chase down large game, farm
                | a piece of land for decades, write encyclopedias, and
                | build complex cultures and relationships with hundreds
                | and thousands of others. I'll be happy to be wrong, but
                | it looks hard-as-in-quadratic to get this kind of general
                | intelligence out of machines. Which scales badly.
        
               | startupsfail wrote:
               | When doing the processing GPT remembers these in the
               | "working memory" (very similar to your working memory
               | that is just an actuation of neurons, not an adjustment
               | of the strengths of synaptic connections).
               | 
               | And then, there is a chance that the inputs and outputs
                | of GPT will be saved and used for fine-tuning. In a way
               | that is similar to long-term memory consolidation in
               | humans.
               | 
               | But overall, yes, I agree, GPT-4 in an empty space,
               | without fine-tuning is very limited.
        
               | dilyevsky wrote:
                | It doesn't remember anything, unless you mean that
                | holding intermediate values in the forward pass counts
                | as "remembering". The prompt continuation feature is
                | just a trick where they refeed previous questions and
                | replies back to it with the new question at the end.
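                | 
                | In code the trick is tiny; a minimal sketch, where
                | send_to_model() is a hypothetical stand-in for a
                | stateless chat-completion call:
                | 
                |     history = []
                | 
                |     def send_to_model(messages):
                |         """Hypothetical stateless call; returns a reply."""
                |         raise NotImplementedError
                | 
                |     def chat_turn(user_text):
                |         history.append(
                |             {"role": "user", "content": user_text})
                |         # the whole transcript is resent every turn;
                |         # nothing persists inside the model itself
                |         reply = send_to_model(history)
                |         history.append(
                |             {"role": "assistant", "content": reply})
                |         return reply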
        
               | GistNoesis wrote:
               | >And if only scaling that context length weren't
               | quadratic...
               | 
                | There are transformer approximations that are not
                | quadratic (available out of the box for more than a
                | year):
                | 
                | Two schools of thought here:
                | 
                | - People who approximate the neighbor search with
                | something like "Reformer", with O(L log L) time and
                | memory complexity.
                | 
                | - People who use a low-rank approximation of the
                | attention product, with something like "Linformer",
                | getting O(L) complexity but more sensitivity to
                | transformer rank collapse.
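                | 
                | A toy numpy illustration of the reassociation idea
                | behind the linear-time variants (not the actual
                | Reformer or Linformer algorithms, just why dropping
                | the full softmax removes the L x L matrix):
                | 
                |     import numpy as np
                | 
                |     L, d = 1024, 64
                |     Q, K, V = (np.random.randn(L, d) for _ in range(3))
                | 
                |     # exact softmax attention materializes an L x L
                |     # score matrix: O(L^2) time and memory
                |     S = np.exp(Q @ K.T / np.sqrt(d))
                |     exact = (S / S.sum(-1, keepdims=True)) @ V
                | 
                |     # linear-time variant: swap softmax for a positive
                |     # feature map phi and reassociate the product as
                |     # phi(Q) @ (phi(K).T @ V), so no L x L matrix exists
                |     phi = lambda X: np.where(X > 0, X + 1.0, np.exp(X))
                |     num = phi(Q) @ (phi(K).T @ V)          # O(L * d^2)
                |     den = phi(Q) @ phi(K).sum(0)[:, None]  # normalizer
                |     approx = num / den                     # shape (L, d)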
        
           | pixl97 wrote:
           | An AGI/ASI would be one of those tools that would make virus
           | editing that much easier. One nutjob with a 'research
           | scientist in a box' makes the drudgery of gene editing that
           | much easier.
           | 
           | Now, if we're dumb enough to give AGI self motivation and
           | access to tooling you can get paperclip maximizers, the AI
           | could be the nutjob that you mention.
        
         | 29athrowaway wrote:
          | It is being used to suggest code completions, so it could
          | suggest a completion to someone that, upon execution, becomes
          | wormable malware that infects everyone.
        
           | pixl97 wrote:
            | Most companies at least still have tools that perform checks
            | for simple versions of workable exploits. About the best
            | chance it would have is to write a complex library and get
            | everyone to use that, where the library is exploitable in a
            | complicated fashion.
        
             | 29athrowaway wrote:
              | Most of those will not work because it will be malware
             | without a recognizable signature.
        
         | smodad wrote:
         | I have to disagree. Not releasing it to the public makes it
         | more dangerous. One major downside of all development being
         | done in private is that the AI can very easily be co-opted by
         | our self-appointed betters and you end up with a different kind
         | of dystopia where every utterance, thought and act is recorded
         | and monitored by an AI to make sure no one "steps out of line."
         | 
         | I think the solution is releasing it to the general public with
         | batteries included. At least that way, the rogue AI's that
         | might develop due to irresponsible experiments could be
         | mitigated by white hat researchers who have their own AI bot
         | swarm. In other words, "the only way to stop a bad guy with an
         | AI is a good guy with an AI."
        
           | dan_mctree wrote:
           | The bad guy with an AI may very well build such a competent
           | and fast acting AI that there's no defense possible. To
           | strain the analogy, if the good guy with the gun already has
           | a bullet in him, he ain't stopping much
        
           | sorokod wrote:
           | My opinion is that the "self-appointed betters" scenario is
           | the lesser of two evils - it is still evil but there is no
           | going back on that one now.
           | 
           | As to "white hat researchers who have their own AI bot
           | swarm", the assumption here is that the swarm can be
           | controlled like some sort of pet. Since even at this early
           | stage no one has a clue how GPT (say) actually manages to be
           | as clever as it is, the assumption is not warranted when
           | looking into the future.
        
             | noduerme wrote:
             | Given that GPT-4 can already write image prompts as well or
             | better than humans can, it wouldn't be surprising if it
             | could convince any other AI to join it and override the
             | white hats running the "private swarm".
        
           | xg15 wrote:
           | I get your point, but just to put it into perspective, you
           | could theoretically use the same logic with bioweapons:
           | 
           | "Keeping this genetically engineered killer virus restricted
           | to high security labs actually makes it _more_ dangerous - it
           | needs to be released into the wild, so people 's immune
           | systems have a chance to interact with the pathogen and
           | develop natural immunity!"
           | 
            | Covid gave a taste of how that kind of attitude would work
            | out in practice.
        
           | Riverheart wrote:
            | How does releasing AI to everyone prevent AI from being used
            | by authoritarian govs to monitor everything? Well, the
            | three-letter agencies would spy on the public, but Uncle Bob
            | has an AI and who knows what he might do with it. If
            | anything, more people working on AI is going to make that
            | dystopia a reality faster.
        
           | amrb wrote:
            | So there's this little problem: say we can track X number of
            | malware samples with the current security defenders. But
            | ChatGPT lowers the barrier to programming to a point where
            | the average teenager could write malware, and we get a wave
            | of hacks going on... what's the response there?
        
             | UncleEntity wrote:
             | Have the AI working to plug the holes before the script
             | kiddies can exploit them?
             | 
             | Such an obvious solution that any silly monkey can think it
             | up.
        
               | amrb wrote:
                | And who trains it to find the problems? Now you're
                | building scanning tools, and if you're lucky you can get
                | it to print out a commit to fix the issue, if you have
                | access to the code...
        
           | whiplash451 wrote:
           | The discussion needs to go beyond the model.
           | 
           | We need to talk about the training set for GPT and the
           | process around RLHF.
        
             | rapjr9 wrote:
             | Yes, the training data comes from people, and people are
             | corrupt, illogical, random, emotional, unmotivated, take
             | shortcuts, cheat, lie, steal, invent new things, and lead
             | boring lives as well as not so boring lives. Expect the
             | same behaviors to be emulated by a LLM. Garbage in =
             | garbage out is still true with AI.
        
               | zarzavat wrote:
               | And the predominant mode of thought at OpenAI is that
                | alignment can be achieved through RL, but we also know
               | that this doesn't actually work because you can still
               | jailbreak the model. Yet they are still trying to build
               | ever stronger egg shells. However much you RLHF the model
               | to pretend to be nice, it still has all of the flawed
               | human characteristics you mention on the inside.
               | 
               | RLHF is almost the worst thing you can do to a model if
               | your goal is safety. Better to have a model that looks
                | evil if it has evil inside than one that looks nice and
               | friendly but still has the capability for evil underneath
               | the surface. I've met people like the latter and they are
               | the most dangerous kinds of people.
        
           | startupsfail wrote:
           | I agree. Overall the whole situation feels like we've just
            | entered the atomic age and are proliferating plutonium,
            | while selling shiny radioactive toys [I'm actually pretty
            | serious here; the effects of prolonged interactions with an
            | AI haven't been evaluated yet, and technically there is even
            | a possibility of overriding a weak personality].
           | 
           | But it still feels like it is much safer to let GPT-4 loose
            | and assess the consequences, compared to developing GPT-8
            | in private and letting it leak accidentally.
        
             | smodad wrote:
             | For sure. Just to be clear, I'm not saying the situation
             | we're in where we have to release it to the general public
             | is a great situation to be in. But I think we're at a point
              | where there are no optimal solutions, only tradeoffs.
        
             | cleanchit wrote:
              | Alright, who plugged GPT into his HN account?
        
             | jjoonathan wrote:
             | Yes. The only possible way this gets taken seriously is if
             | a mediocre AI tries a power move, causes some damage, and
             | faceplants before it's too big to stop.
        
               | startupsfail wrote:
               | I would not be surprised, if GPT-4 (in its optimal
               | environment, with access to a well-working external
               | memory, prompted in a right way, etc) is already capable
               | enough to do an interesting power move.
        
         | drexlspivey wrote:
          | At some point someone will leak it or they will get breached;
          | the stakes are very high.
        
           | capableweb wrote:
           | Depends on the size of the model/weights. If it's 1TB or
           | more, being able to exfiltrate it from wherever it is to a
           | local machine will be hard to do unnoticed, if the company
            | security team has even an iota of knowledge and experience.
        
             | arbuge wrote:
             | Bit of an aside here but it's amazing to think this thing
             | could fit on a $30 hard drive.
        
         | og_kalu wrote:
          | Well, yeah, lol. Giving LLMs "do whatever you want and run
          | forever" agency while interacting with other systems is, all
          | things considered, pretty easy. So it'll definitely be done.
        
           | arbuge wrote:
           | There is the risk you end up with massive API usage fees as
           | well as interesting potential legal liabilities if you go
           | down this route.
        
         | visarga wrote:
         | Just opening it up to the public gives the AI that many vectors
         | of action in the real world - us.
        
           | quonn wrote:
           | But not in a way that's more problematic than human-to-human
           | communication.
        
             | Super_Jambo wrote:
              | Quantity has a quality all of its own.
        
       | amrb wrote:
       | Looks like a ReAct prompt with GPT4
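        | 
        | The core of a ReAct-style loop is small; a minimal sketch,
        | assuming a hypothetical call_llm() helper and a made-up search
        | tool (not Auto-GPT's actual prompt or code):
        | 
        |     import json
        | 
        |     def call_llm(messages):
        |         """Hypothetical chat-completion wrapper; returns text."""
        |         raise NotImplementedError
        | 
        |     TOOLS = {
        |         "search": lambda q: f"(stub) results for {q!r}",
        |         "finish": lambda answer: answer,
        |     }
        | 
        |     SYSTEM = (
        |         "Work toward the goal in Thought/Action steps. Reply "
        |         'only with JSON: {"thought": "...", "action": "...", '
        |         '"input": "..."}. Actions: ' + ", ".join(TOOLS)
        |     )
        | 
        |     def run_agent(goal, max_steps=5):
        |         messages = [{"role": "system", "content": SYSTEM},
        |                     {"role": "user", "content": goal}]
        |         for _ in range(max_steps):
        |             # the model picks the next thought and action
        |             step = json.loads(call_llm(messages))
        |             # the chosen action runs outside the model
        |             result = TOOLS[step["action"]](step["input"])
        |             if step["action"] == "finish":
        |                 return result
        |             messages.append(
        |                 {"role": "assistant", "content": json.dumps(step)})
        |             messages.append(
        |                 {"role": "user", "content": f"Observation: {result}"})
        |         return None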
        
       | anotherpaulg wrote:
       | I had ChatGPT write an entire web app for me last week. I was
       | surprised at how capable it was.
       | 
        | Here's a writeup of my workflow:
        | https://github.com/paul-gauthier/easy-chat
       | 
       | Basically all of the code in that repo was written by ChatGPT.
        
         | krainboltgreene wrote:
         | I have never felt more safe about my job.
        
       | ftxbro wrote:
        | I wonder if they are using these autonomous GPT loops to
        | control DAOs?
       | https://en.wikipedia.org/wiki/Decentralized_autonomous_organ...
        
         | Teever wrote:
         | Since GPT4 was released I have been trying to find a paper that
         | I read a few years ago, for some reason your comment was the
         | spark that made me remember the title:
         | 
         | https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2954173
        
           | ftxbro wrote:
           | Algorithmic entities! It makes me think they will insert the
           | concept of https://en.wikipedia.org/wiki/Natural_person into
           | more legislation. Also they will have to change the name of
           | 'algorithmic entities' because the word 'algorithm' is so
           | pre-2020s. They will have to say 'AI' instead!
        
         | capableweb wrote:
          | Unlikely. DAOs are supposed to be decentralized; OpenAI/GPT
          | can provide nothing of value for a requirement like that.
        
       | speedgoose wrote:
        | I guess we need to prepare to defend ourselves against the
        | cyberattacks and the drones.
        
       | spullara wrote:
       | One goal of the AI should be to pay for its development and
       | activity.
        
         | [deleted]
        
         | tiedieconderoga wrote:
         | Too many negative externalities. It would probably do something
         | like start a well-crafted ponzi scheme.
         | 
         | And who would you prosecute if it committed fraud?
        
       ___________________________________________________________________
       (page generated 2023-04-02 23:01 UTC)