https://www.rwkv.com/

RWKV Language Model

Github | Twitter | Discord

RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be trained directly like a GPT transformer (parallelizable). The current version is RWKV-7 "Goose". It combines the best of RNNs and transformers: great performance, linear time, constant space (no kv-cache), fast training, infinite context length, and free text embedding. It is 100% attention-free, and a Linux Foundation AI project.

Demos: v6 7B Demo | v7 0.1B Demo | WebGPU Demo | RWKV paper

RWKV Projects:
- RWKV-LM: Training RWKV (and latest developments)
- RWKV-Runner: RWKV GUI with one-click install and API
- RWKV pip package: Official RWKV pip package
- RWKV-PEFT: Finetuning RWKV (9 GB VRAM can finetune 7B)
- RWKV-server: Fast WebGPU inference (NVIDIA/AMD/Intel), nf4/int8/fp16
- More... (400+ RWKV projects)

Misc:
- RWKV raw weights: All latest RWKV weights
- RWKV weights: HuggingFace-compatible RWKV weights
- RWKV-related papers
- RWKV wiki: Community wiki

RWKV Papers:
- RWKV-7 explained [RWKV-7]
- RWKV-7 illustrated [rwkv-x070]
- RWKV-6 illustrated [rwkv-x060]
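The "linear time, constant space (no kv-cache)" claim can be illustrated with a toy linear-attention recurrence. This is NOT the actual RWKV-7 update rule (RWKV-7 uses learned, data-dependent decays and a richer state transition); it is only a minimal sketch, with an assumed scalar decay, of how a fixed-size recurrent state can replace a key/value cache that grows with sequence length:

```python
# Toy sketch: a fixed-size recurrent state instead of a growing kv-cache.
# Hypothetical simplification for illustration, not the real RWKV-7 formula.
import numpy as np

def run_recurrent(qs, ks, vs, decay=0.9):
    """Process a sequence step by step with an O(d*d) state,
    whose size is independent of sequence length."""
    d = qs.shape[1]
    state = np.zeros((d, d))  # the entire "memory": fixed size
    outs = []
    for q, k, v in zip(qs, ks, vs):
        # fold the new token into the state, decaying older tokens
        state = decay * state + np.outer(k, v)
        # read out with the query; equals sum over past tokens of
        # decay**(t-s) * (q . k_s) * v_s, i.e. a causal attention-like sum
        outs.append(q @ state)
    return np.stack(outs)

T, d = 16, 4
rng = np.random.default_rng(0)
qs, ks, vs = rng.standard_normal((3, T, d))
out = run_recurrent(qs, ks, vs)
print(out.shape)  # (16, 4): one output per token, state never grew
```

Because the readout at step t equals the attention-style sum over all earlier steps, the same computation can also be evaluated in parallel over the whole sequence during training, which is the sense in which such models train "like a GPT transformer" while running as an RNN at inference time.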