[HN Gopher] ChatRWKV, like ChatGPT but powered by the RWKV (RNN-...
       ___________________________________________________________________
        
       ChatRWKV, like ChatGPT but powered by the RWKV (RNN-based, open)
       language model
        
       Author : maraoz
       Score  : 78 points
       Date   : 2023-01-19 21:29 UTC (1 hour ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | nl wrote:
       | For those wondering how on earth they are getting decent results
       | from an RNN without long-range forgetting, I don't really know
       | either!
       | 
       | But they reference https://arxiv.org/abs/2105.14103 and the
       | bottom section of https://github.com/BlinkDL/RWKV-LM has an
       | explainer.
        
       | [deleted]
        
       | totoglazer wrote:
       | This might be an interesting language model. However, people care
       | about ChatGPT entirely due to its quality, which this doesn't
       | demonstrate yet.
        
         | phist_mcgee wrote:
         | The leap in public exposure wasn't so much GPT3 to GPT3.5; it
         | was attaching a clean UI to the model (with sane defaults) and
         | allowing people to talk to it like a person.
         | 
         | Suddenly it became something 'real'.
        
           | TJSomething wrote:
           | One of the important parts of ChatGPT over plain GPT-3 is the
           | reinforcement learning from human feedback to ensure
           | alignment, without which it's not quite as good of a product
           | for the public.
        
           | tinsmith wrote:
           | This is a remarkably good take that just didn't dawn on me
           | until I read your comment. Even if ChatGPT were of lesser
           | quality than the current iteration, the fact that they had a
           | way for anyone to _easily_ interact with it really was a home
           | run, and that can be true for any software, really.
        
           | junipertea wrote:
           | They also did reinforcement learning on top of a frozen
           | trained model. It is considerably more than just attaching a
           | UI, as that alone would just finish sentences rather than
           | answer questions. https://huggingface.co/blog/rlhf
        
           | totoglazer wrote:
           | No. ChatGPT's UI is incredibly simple and basically exactly
           | what every chat bot test REPL looks like.
           | 
           | The delta from GPT3 to ChatGPT comes from the expanded
           | context and the control the model offers through fine-tuning.
           | E.g. read the InstructGPT paper to see the path on the way to
           | ChatGPT.
        
           | redox99 wrote:
           | It's not just the UI. ChatGPT (which is further finetuned and
           | uses RLHF) definitely produces better output than GPT3,
           | especially without prompt engineering.
        
           | gamegoblin wrote:
           | This is mostly correct. GPT3.5 is better, has a larger
           | context window, etc. But it's a very incremental step above
           | GPT3.
           | 
           | I had wired up GPT3 to a Twilio phone number and made
           | something basically like ChatGPT months before ChatGPT was
           | released -- me and my friends texted it all the time to get
           | information, similar to how people use ChatGPT. The prompt to
           | get decent performance is super simple. Just something like:
           | 
           |     The following is a transcript between a human and a
           |     helpful AI assistant. The AI assistant is knowledgeable
           |     about most facts of the world and provides concise
           |     answers to questions.
           | 
           |     Transcript:
           |     {splice in the last 30 messages of the conversation}
           | 
           |     The next thing the assistant says is:
           | 
           | Over time I did upgrade the prompt a bit to improve
           | performance for specific kinds of queries, but nothing crazy.
           | 
           | Cost me $10-20/mo to run for the low/moderate use by me and a
           | few friends.
           | 
           | Interestingly, for people who didn't know its limitations /
           | how to break it, it was basically passing the Turing test.
           | ChatGPT is inhumanly wordy, whereas GPT3 can actually be much
           | more concise when prompted to do so. If, instead of prompting
           | it that it is an AI assistant, you prompt it that it is a
           | close friend with XYZ personality traits, it does a very good
           | job of carrying on a light SMS conversation.
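The prompt-splicing approach described in the comment above could be sketched roughly as follows. This is a hypothetical illustration, not the commenter's actual code: `build_prompt`, the `(speaker, text)` message format, and the 30-message window default are all assumptions based on the description.

```python
# Hypothetical sketch of the prompt assembly described above: a fixed
# preamble, the last 30 messages of the conversation spliced in, then
# a cue asking the model for the assistant's next line.

PREAMBLE = (
    "The following is a transcript between a human and a helpful AI "
    "assistant. The AI assistant is knowledgeable about most facts of "
    "the world and provides concise answers to questions."
)

def build_prompt(history, window=30):
    """Splice the last `window` (speaker, text) messages into the prompt."""
    recent = history[-window:]
    transcript = "\n".join(f"{speaker}: {text}" for speaker, text in recent)
    return (
        f"{PREAMBLE}\n\n"
        f"Transcript:\n{transcript}\n\n"
        "The next thing the assistant says is:"
    )
```

The returned string would then be sent to a completion endpoint, with the model's reply texted back to the user (in the commenter's setup, via a Twilio phone number).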
        
           | moffkalast wrote:
           | Well yes, having no context memory, being slightly worse and
           | requiring either a monster rig to run or paying per prompt
           | made it completely and utterly irrelevant.
           | 
           | Even now that it's improved and free to use, its actual
           | practical usability is marginal at best, given the rate of
           | blatantly wrong info being spewed with 105% confidence at the
           | moment.
        
       ___________________________________________________________________
       (page generated 2023-01-19 23:00 UTC)