Post B30G3pm6HF7cB69I48 by picofarad@noauthority.social
 (DIR) Post #B2zNALdTEmCjFdCoFc by picofarad@noauthority.social
       2026-02-04T20:34:12Z
       
       0 likes, 0 repeats
       
       messing with Ace-Step 1.5 turbo local with the 1.7B model
       
 (DIR) Post #B2zNGC4bhd0NKcgao4 by picofarad@noauthority.social
       2026-02-04T20:35:16Z
       
       0 likes, 0 repeats
       
       original song
       
 (DIR) Post #B2zR07ewkGp6pFH9Fo by mycal@noauthority.social
       2026-02-04T21:17:11Z
       
       0 likes, 0 repeats
       
        @picofarad I got that running on my low power linux box on a 3050. Very interesting.
       
 (DIR) Post #B2zTScn0nWGpOGlb0a by picofarad@noauthority.social
       2026-02-04T21:44:44Z
       
       0 likes, 0 repeats
       
       i can't stop
       
 (DIR) Post #B30DWt5Sk1cUzDMqkC by picofarad@noauthority.social
       2026-02-05T06:20:55Z
       
       0 likes, 0 repeats
       
        @mycal isn't very big. Hilarious that images, video, and audio are relatively small models. I'm sure there's some massive merges now; however, most of the models i have are 4-8GB.

        LLM models? 1TB and up (qwen is considered small at something like 800GB).

        If a picture is worth a thousand words then the LLMs should be 4-8x larger, right?
       
 (DIR) Post #B30EsfPEO7cIscYyfI by mycal@noauthority.social
       2026-02-05T06:36:04Z
       
       0 likes, 0 repeats
       
        @picofarad on the small GPU things are very limited on ACE, there is a major GPU memory leak between the sample phase and song generation, so tough on the 3050, but on the 3090 it seems much better with the larger model.

        As for general local models that I can run: used to love OpenAI oss 20b, then deepseek-r1-distill-qwen-32b, but now I think nvidia/nemotron-3-nano is about the best. This is the worst it's ever going to be. I've been told to try miromind-ai.mirothinker-v1.5-30b
       
 (DIR) Post #B30F7I5PN9gStnGSBs by picofarad@noauthority.social
       2026-02-05T06:38:44Z
       
       0 likes, 0 repeats
       
        @mycal i noticed the memory leak in that suddenly the last part before mp3-ify was slow, but i've experienced this with *good* ML software. it just starts shuttling to the main RAM. Luckily i think all of the "stuff" needed to continue the workflow (as the intention is to collaborate with the AI rather than 1-shot it) is in the .json and the mp3 output.

        Where and how do i use miromind? is it micromind? does it work with ace-step? tyvm
       
 (DIR) Post #B30FJYnJ4uNSC1eF84 by picofarad@noauthority.social
       2026-02-05T06:40:56Z
       
       0 likes, 0 repeats
       
        @mycal and as far as LLMs go, i've never had luck using one for doing an entire project from start to finish, *except* the 80B qwen, which took something like 4 hours to make a single-file 2048 clone that ran in any browser with js support.

        I thought about going into debt to get a blackwell or a threadripper/epyc with enough ram to load the 650B qwen model, but as i was hemming and hawing ram prices quintupled. lol.
       
 (DIR) Post #B30FZDqOjonjqnYvFw by mycal@noauthority.social
       2026-02-05T06:43:46Z
       
       0 likes, 0 repeats
       
        @picofarad No, I'm just talking in general. I've been trying to run my open Claw (Clawdbot) on local models and getting inconsistent results. But in ACE the sample part could be done much better with a good local model or even a foundational one with a good prompt.

        The magic happens with the DiT, and the better the input the better the output. I'd like to train the DiT, but not sure I'll get to that any time soon.
       
 (DIR) Post #B30FoYEhCgi7RGEk1Q by mycal@noauthority.social
       2026-02-05T06:46:33Z
       
       0 likes, 0 repeats
       
        @picofarad You should get on the x/LocalLLaMA group, lots going on there, or there was up until a few days ago. Best local model configs I've seen.
       
 (DIR) Post #B30G3oRrD1Kq41fbsG by mycal@noauthority.social
       2026-02-05T06:48:11Z
       
       0 likes, 0 repeats
       
        @picofarad FYI, I'm sure you have seen it, but LM Studio is probably the best GUI-based software for rapid experiments with local models.
       
 (DIR) Post #B30G3pm6HF7cB69I48 by picofarad@noauthority.social
       2026-02-05T06:49:17Z
       
       0 likes, 0 repeats
       
        @mycal yeah i have lms and comfyui for LLM and visual stuff respectively. I know comfy can do other things (including ace-step v1 currently), but i have my model directories set up in such a way that it'd waste a half terabyte.