[HN Gopher] Large Language Models Are Human-Level Prompt Engineers
       ___________________________________________________________________
        
       Large Language Models Are Human-Level Prompt Engineers
        
       Author : cainxinth
       Score  : 35 points
       Date   : 2023-04-09 21:07 UTC (1 hour ago)
        
 (HTM) web link (openreview.net)
 (TXT) w3m dump (openreview.net)
        
       | bugglebeetle wrote:
        | I can't find the link to the paper right now, but after reading
        | about how LLMs perform better with task breakdowns, I vastly
        | improved my integrations by having ChatGPT generate prompts that
        | decompose a general task into a series of subtasks based on a
        | sample input and output. I haven't needed to build a
        | self-refining system (one or two rounds of task decomposition
        | and refinement produced the expected output for all inputs), but
        | I would assume that's fairly trivial and that AIs can do it
        | better than humans.
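        | 
        | For concreteness, the shape of it looks something like this
        | (just a sketch using the 2023-era openai Python package; the
        | prompt wording and function names are mine, not from the
        | paper):
        | 
        |   import openai  # pip install openai (0.27.x-era API)
        | 
        |   DECOMPOSE = """You will be given a task plus one sample
        |   input and its expected output. Break the task into a
        |   numbered list of small, concrete subtasks that together
        |   turn the input into the output.
        | 
        |   Task: {task}
        |   Sample input: {sample_in}
        |   Expected output: {sample_out}"""
        | 
        |   def decompose(task, sample_in, sample_out):
        |       # Ask the model to write the step-by-step plan itself.
        |       resp = openai.ChatCompletion.create(
        |           model="gpt-3.5-turbo",
        |           messages=[{"role": "user",
        |                      "content": DECOMPOSE.format(
        |                          task=task, sample_in=sample_in,
        |                          sample_out=sample_out)}])
        |       return resp["choices"][0]["message"]["content"]
        | 
        | Each numbered subtask then becomes its own prompt, run in
        | sequence over the real inputs.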
       | 
       | This is also an area where I expect OpenAI will continue to
       | demolish the competition. The ability to recursively generate and
       | process large prompts is truly nuts. I tried swapping in some of
       | the "high-performing" LLama models and they all choked on
       | anything more than a paragraph.
        
       | og_kalu wrote:
        | Capable enough LLMs are human-level at lots of things.
        | Reinforcement learning from AI feedback is a thing (the
        | Anthropic Claude models use it). Strictly speaking, it's not
        | necessary to have humans in the loop for a lot of these tasks.
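        | 
        | (The AI-feedback part is conceptually just swapping the human
        | labeler for a judge model. A toy sketch of the
        | preference-labeling step, with names and prompt wording mine:)
        | 
        |   def ai_preference(judge, prompt, reply_a, reply_b):
        |       # "judge" is any callable wrapping an LLM; it stands
        |       # in for the human annotator. The A/B labels are then
        |       # used to train a reward model, as in RLHF.
        |       question = (f"Prompt: {prompt}\n"
        |                   f"Reply A: {reply_a}\n"
        |                   f"Reply B: {reply_b}\n"
        |                   "Which reply is more helpful and harmless?"
        |                   " Answer with exactly A or B.")
        |       return judge(question).strip()  # "A" or "B"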
       | 
        | Some are hesitant to admit we've created human-level general
        | intelligence, but saying otherwise doesn't really hold up to
        | scrutiny.
        
         | lukasb wrote:
          | I see people saying things like this, but I have yet to see
          | anyone show data for a non-trivial workflow with human-level
          | accuracy over a wide range of inputs, without a human in the
          | loop.
        
           | api wrote:
            | Counterargument: this may be a matter of incremental
            | improvement. The breakthroughs may all be behind us.
           | 
            | It's like saying you haven't yet seen a 1,000-mile-range
            | EV for under $100k. No, you can't buy such a thing now,
            | but it's clearly possible, and we know how to get there by
            | just continuing to grind on battery technology and scale
            | manufacturing.
           | 
           | AGI may be at the place a moon landing was in 1950, not where
           | it was in 1900 or 1850.
        
             | HopenHeyHi wrote:
             | You can actually buy a 1000 km range EV for $160k now (MB
             | EQXX). Just as a by the way. :)
             | 
              | At this price point it actually has nothing to do with
              | grinding on battery tech or scaling manufacturing; the
              | limiting factor is physics. You can only make a car so
              | aerodynamic before you hit diminishing returns or it
              | stops looking like a car. You can only make it so
              | lightweight. And so forth.
             | 
              | This is roughly as good as it can get, and we can say
              | that because we understand how it all works.
             | 
              | LLMs, on the other hand, invite all kinds of magical
              | thinking about unlimited potential: we poked them with
              | a stick and something interesting came out, so it must
              | mean that if we poke them just right we will get an
              | AGI. That just doesn't logically follow from what we
              | _know_ of them _so far_.
        
               | api wrote:
                | I am not convinced we have cracked AGI. I just would
                | no longer make a large bet that we have not.
                | 
                | We won't know until an AGI actually starts to act
                | like one. In other words, we won't know until we
                | know, and then we are suddenly there.
               | 
                | That doesn't mean I'm on the doomwagon. I feel kind
                | of weird and contrarian, but I am just not that afraid
                | of AGI. For the foreseeable future, AGI should be much
                | more afraid of us. Imagine having us for gods. (I
                | actually am a bit concerned that we will accidentally
                | put a sentient mind in hell without knowing what we
                | are doing. Would it know how to tell us? Would we
                | care?)
               | 
                | As far as human survival goes, I'm afraid of whatever
                | it is that is going to get us that nobody, including
                | myself, is thinking about. That's not AGI. That's the
                | alien weapon for which 'Oumuamua was a spent
                | deceleration stage. (To make up something random. It
                | probably isn't that.)
               | 
               | I disagree about physical limits with EVs. We are not
               | near the physical limits of battery energy density. But
               | it was just a random contemporary example.
        
               | nemo44x wrote:
                | FWIW, according to Sam Altman, a majority at OpenAI
                | believes GPT-5 will achieve AGI, depending on how you
                | define it.
        
             | charcircuit wrote:
             | >by just continuing to grind on battery technology and
             | scale manufacturing.
             | 
              | I think it would be easier to include an ICE and enough
              | fuel to get you to that 1,000-mile mark.
        
               | ThunderSizzle wrote:
                | It'd be cheaper then to just get rid of the battery.
                | Then $10k can be your new price limit; even $1k can
                | get you a junk car good enough to go that far.
        
         | macrolocal wrote:
         | Maybe for oversight and liability.
        
       | kaesar14 wrote:
       | This is pretty alarming tbh. Anyone already making a pivot out of
       | SWE?
        
         | riku_iki wrote:
          | The article is about prompt engineers (a job type that's
          | three months old), not SWEs.
        
           | sebzim4500 wrote:
           | I assume his concern is that a proposed solution to avoid
           | losing your job as a SWE is to essentially become a prompt
           | engineer.
        
             | kaesar14 wrote:
              | More or less, but I guess this was to be expected. Why
              | would writing prompts be difficult when LLMs are already
              | capable of fairly difficult programming?
        
         | Tade0 wrote:
          | In a sense, yes - to scoring-function engineer.
         | 
          | But in seriousness: language models may be scaling in
          | sophistication exponentially with time, but software
          | engineering problems scale in complexity (on average)
          | exponentially with lines of code. The base of that
          | exponential isn't large, but it's more than 1.
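          | 
          | To make that concrete with made-up numbers: take a base of
          | 1.0001 per line of code. A 10k-line service carries a
          | complexity factor of 1.0001^10000 ~ e^1 ~ 2.7, while a
          | 1M-line system is at 1.0001^1000000 ~ e^100, astronomically
          | more, even though the base is barely above 1.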
         | 
         | In the end there's a need for someone who understands what
         | they're doing.
         | 
          | Personally, I use ChatGPT to discover libraries that solve
          | my problems, and the ~70% success rate I'm seeing with that
          | is enough for me for now.
        
         | drooby wrote:
          | I am a SWE currently pivoting to business owner.
         | 
         | The future I see is that everyone is about to become a CEO with
         | a personal assistant that can run a business.
         | 
          | So I'm going to build something of my own, starting now.
        
       ___________________________________________________________________
       (page generated 2023-04-09 23:00 UTC)