Posts by mycal@noauthority.social
(DIR) Post #Aw4J75qnkHZGknU9Sq by mycal@noauthority.social
2025-07-12T20:37:39Z
0 likes, 0 repeats
@PNS 'Merica right there.
(DIR) Post #Aw4M5fEXWOmpm7bBeS by mycal@noauthority.social
2025-07-12T20:10:13Z
0 likes, 0 repeats
@Quentel @Cosmic @CSB @verita84 He was there
(DIR) Post #AwIid53JjzCuY6EZNo by mycal@noauthority.social
2025-07-19T19:29:29Z
0 likes, 0 repeats
@PNS steel cans were also lined fyi.
(DIR) Post #AwKha3Ppeu7wxvg68O by mycal@noauthority.social
2025-07-20T18:27:11Z
0 likes, 0 repeats
@picofarad just started using it. For most things I like Docker, but Kubernetes seems like it will scale much better over many nodes, where Docker can be a pain. The reality is bare metal installs just suck, so anything is better than that.
(DIR) Post #AwNVr2OG6m4cjU9arY by mycal@noauthority.social
2025-07-22T02:59:55Z
0 likes, 0 repeats
@picofarad define sessions, I'm calling bullshit on this. Best I know of is Meta's signaling servers, 200-500K concurrent connections, but those are low-bandwidth, mostly idle connections with little churn. 3M per second churn is not happening on any one server.
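A quick back-of-envelope check of the claim in the post above (the core count is my own assumption, not from the thread): 3M new TCP connections per second leaves almost no time budget per connection, even on a big box.

```python
# Back-of-envelope: what would 3M new TCP connections/sec on one server require?
CONNS_PER_SEC = 3_000_000
NS_PER_SEC = 1_000_000_000

# Time budget per connection if a single core did all the work
budget_ns = NS_PER_SEC / CONNS_PER_SEC
print(f"{budget_ns:.0f} ns per connection on one core")

# Spread evenly across 64 cores (an assumed, generous core count), each
# accept + handshake bookkeeping + teardown still gets only ~21 us of CPU time.
cores = 64
budget_us_per_core = budget_ns * cores / 1000
print(f"{budget_us_per_core:.1f} us per connection across {cores} cores")
```

That ~333 ns single-core budget is a few hundred CPU cycles, which is why sustained per-second churn numbers in the millions on one machine deserve skepticism, while millions of mostly idle concurrent connections are routine.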
(DIR) Post #AwPTmTFizofl0LmCqe by mycal@noauthority.social
2025-07-23T01:46:07Z
0 likes, 0 repeats
@picofarad Nothing I see on this thread says anything close to 3M per second on TCP connections on one server.
(DIR) Post #AwPaWaS79uZmNdwdfs by mycal@noauthority.social
2025-07-23T03:01:39Z
0 likes, 0 repeats
@picofarad 2M concurrent connections is nothing, but 2M per second is a whole other thing. Still bullshit.
(DIR) Post #AySJPA8zt2guqWSY7c by mycal@noauthority.social
2025-09-22T06:17:20Z
1 likes, 1 repeats
@yukiame
(DIR) Post #B05x3Gq9BzXgfW4OOG by mycal@noauthority.social
2025-11-08T23:16:30Z
0 likes, 1 repeats
(DIR) Post #B06FhpO9xXCQZrT3h2 by mycal@noauthority.social
2025-11-08T23:30:32Z
1 likes, 0 repeats
(DIR) Post #B06Fx6ykqEA9Tas9i4 by mycal@noauthority.social
2025-11-08T23:25:02Z
1 likes, 0 repeats
(DIR) Post #B0Fk1mXylTyoI1QgaW by mycal@noauthority.social
2025-11-15T00:26:59Z
0 likes, 1 repeats
From one of the founders of Burning Man
(DIR) Post #B1ucMWHVqUZjMJTBAW by mycal@noauthority.social
2026-01-03T08:07:19Z
1 likes, 0 repeats
@Chzikken_1486 seems we have invaded
(DIR) Post #B2s0R1FecWNxtxKJU0 by mycal@noauthority.social
2026-02-01T06:24:29Z
1 likes, 2 repeats
(DIR) Post #B2xOekMdZb9mGqTHSS by mycal@noauthority.social
2026-02-03T21:41:28Z
0 likes, 0 repeats
@eriner That guy creeps me out every time I see him, now more so.
(DIR) Post #B2zR07ewkGp6pFH9Fo by mycal@noauthority.social
2026-02-04T21:17:11Z
0 likes, 0 repeats
@picofarad I got that running on my low-power Linux box on a 3050. Very interesting.
(DIR) Post #B30EsfPEO7cIscYyfI by mycal@noauthority.social
2026-02-05T06:36:04Z
0 likes, 0 repeats
@picofarad On the small GPU things are very limited in ACE; there is a major GPU memory leak between the sample phase and song generation, so it's tough on the 3050, but on the 3090 it seems much better with the larger model. As for general local models I can run: I used to love OpenAI oss 20b, then deepseek-r1-distill-qwen-32b, but now I think nvidia/nemotron-3-nano is about the best. This is the worst it's ever going to be. I've been told to try miromind-ai.mirothinker-v1.5-30b
(DIR) Post #B30FZDqOjonjqnYvFw by mycal@noauthority.social
2026-02-05T06:43:46Z
0 likes, 0 repeats
@picofarad No, I'm just talking in general. I've been trying to run my open Claw (Clawdbot) on local models and getting inconsistent results. But in ACE the sample part could be done much better with a good local model, or even a foundational one with a good prompt. The magic happens with the DiT, and the better the input, the better the output. I'd like to train the DiT, but not sure I'll get to that any time soon.
(DIR) Post #B30FoYEhCgi7RGEk1Q by mycal@noauthority.social
2026-02-05T06:46:33Z
0 likes, 0 repeats
@picofarad You should get on the x/LocalLLaMA group, lots going on there, or there was up until a few days ago. Best local model configs I've seen.
(DIR) Post #B30G3oRrD1Kq41fbsG by mycal@noauthority.social
2026-02-05T06:48:11Z
0 likes, 0 repeats
@picofarad FYI, I'm sure you've seen it, but LM Studio is probably the best GUI-based software for rapid experiments with local models.
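LM Studio can also serve loaded models over an OpenAI-compatible local HTTP API, so the same experiments can be scripted. A minimal sketch, assuming the server is running on its default port 1234 with a model loaded; the "local-model" name and the helper function names here are placeholders of mine, not from the post.

```python
import json
import urllib.request

def build_chat_request(prompt, model="local-model", temperature=0.7):
    """Build an OpenAI-style chat-completion payload, the request shape
    LM Studio's local server accepts."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask_local(prompt, url="http://localhost:1234/v1/chat/completions"):
    """POST the payload to a locally running LM Studio server and
    return the first completion's text."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint mimics the OpenAI chat-completions shape, the same script works unchanged against other local servers that expose that API.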