Post #Az90qzSIPkVXMzqDJ2 by picofarad@noauthority.social
1 like, 0 repeats
"open" "source"every diarization "open source" *does not work* as the documentation suggests. there's always 20 little "fixes" you have to apply. I get it, you don't want people "stealing" your work or whatever, but, and i cannot stress this enoughYOU CLAIMED IT WAS OPEN SOURCE.make your BS work. and no "hurrdockerrrr" isn't the answer, that adds another layer of BS to deal with, and it doesn't solve the problem if you *don't maintain the docker recipe* as a maintainer.asssssssss
Post #Az90qzzGRAsZ1FGXh2 by ThatCrazyDude@noauthority.social
1 like, 0 repeats
@picofarad if whatever you're using is "free", you are the product being sold in 99% of cases. It's always been like that and it prolly won't change.

Honestly, I don't get why people are still surprised by this realization.
Post #Az90r0NMzYA4E0XnGa by picofarad@noauthority.social
0 likes, 0 repeats
@ThatCrazyDude stable diffusion webui is "free" and "open source", and I download models from wherever, without a login. I'm not the product there. openai-whisper, same thing: if you fetch the models from somewhere, the source code just uses them. No login required.

This isn't like "facebook"; this is source-available "open" "source".
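For reference, what picofarad describes is real behavior: openai-whisper's load_model() accepts a filesystem path to a checkpoint instead of an official model name, so a file fetched by any means just works. A minimal sketch; the paths are hypothetical:

    # Load a locally downloaded whisper checkpoint -- no login, no hub.
    # Paths are hypothetical; load_model() takes a .pt file path as well
    # as an official model name.
    import whisper

    model = whisper.load_model("/models/whisper/large-v2.pt")
    result = model.transcribe("interview.wav")  # needs ffmpeg on PATH
    print(result["text"])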
Post #Az90r0QumMzsP0CcnA by picofarad@noauthority.social
0 likes, 0 repeats
I just deleted the whole venv. I don't got time for this. I'd rather just script ffmpeg to "chunk" the files and use whisper-diarization, which I know works for "short" audio files.

And what's with all these losers making you convert everything to 16000 Hz PCM files? What year is this? Isn't that 2 lines of code? Why do I have to do it?
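It is roughly two lines, as it happens. A sketch using Python's subprocess and ffmpeg, with hypothetical filenames and chunk length: one pass downmixes to 16 kHz mono 16-bit PCM, a second cuts the result into segments for a diarizer that only behaves on short files:

    # Sketch: convert to 16 kHz mono 16-bit PCM, then chunk.
    # Filenames and the 600 s segment length are hypothetical.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "input.m4a",
        "-ac", "1", "-ar", "16000", "-c:a", "pcm_s16le",
        "out_16k.wav",
    ], check=True)

    subprocess.run([
        "ffmpeg", "-i", "out_16k.wav",
        "-f", "segment", "-segment_time", "600", "-c", "copy",
        "out_%03d.wav",  # out_000.wav, out_001.wav, ...
    ], check=True)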
Post #Az90r0rrACY1kYo8mm by ThatCrazyDude@noauthority.social
0 likes, 0 repeats
@picofarad yeah. And I suppose you actually went through the code, so you can say that with any certainty? And even if you did, did you compile the stuff yourself, or did you just download binaries you can't possibly know what's in? 😉
Post #Az90r1JrU4wv9PuVRA by picofarad@noauthority.social
0 likes, 0 repeats
@ThatCrazyDude also, stuff like OpenMPT is "Free" and you're not the product: the downloads are anonymous and the software works offline.
Post #Az90r1TmtAtBeCYQuO by picofarad@noauthority.social
0 likes, 0 repeats
@ThatCrazyDude regarding which one? Stable Diffusion webui? Yes, it runs with no network card installed.

All of this crap I am complaining about is Python; there's no binary other than the virtual environment's python. If the Python Foundation thinks I am the product, the world has bigger problems.
Post #Az90r1iK18W0NHM2Yy by picofarad@noauthority.social
0 likes, 0 repeats
Furthermore, why do I need a huggingface token? I can download the checkpoints with my web browser just fine. Stop making me rely on other services: if huggingface decides they don't want to be free anymore, I should still be able to load models that I downloaded from there prior to the rug-pull.
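The workflow picofarad wants does exist at the hub-client level, at least for repos that aren't gated: download once, then force offline mode so a later policy change can't break local runs. A sketch; the repo id is illustrative, and gated repos still demand a token for the first fetch, which is exactly the complaint:

    # One-time fetch into the local cache; repo id is illustrative.
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download("openai/whisper-large-v2")
    print("model files cached in:", local_dir)

    # For later runs, launch the process with HF_HUB_OFFLINE=1 set in
    # the environment (it's read at import time); the client then serves
    # everything from the local cache and never touches the network.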
Post #Az90r29GOy49ipxYYa by ThatCrazyDude@noauthority.social
1 like, 0 repeats
@picofarad you sure about that? You honestly think that stable diffusion is all Python code that's being interpreted on the fly like your regular Python scripts, not a bunch of precompiled binaries? Good for you, I guess.

Might just want to ask yourself how it is that the stuff actually works without making you wait for hours for results, but I guess I'm digressing ;)
Post #Az90r2mG3zG3fmChKy by picofarad@noauthority.social
0 likes, 0 repeats
@ThatCrazyDude because it uses my beefy GPU's tensor cores and 24 GB of very fast VRAM to process it?

I'm not sure what you're getting at; there's no ".exe" in any folder except the venv's .\Scripts\ folder. When I run LM Studio with no network, it loads the model I point it at off my local hard drive, puts it in my GPU's VRAM, and awaits a prompt. If I connect it to the network, I can tell an app on my phone to use *my local OpenAI endpoint*.

Yes, I'm 97% sure there's nothing leaking.
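The "local OpenAI endpoint" part is straightforward to check: LM Studio's server speaks the OpenAI wire protocol, by default on port 1234. A sketch using the stock openai client; the model name is a placeholder and the key is a dummy, since a local server doesn't validate it:

    # Point the openai client at a local server instead of the real API.
    # Base URL assumes LM Studio's default port; model name is a placeholder.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-used")
    reply = client.chat.completions.create(
        model="local-model",  # ask the server what it actually loaded
        messages=[{"role": "user", "content": "say hi"}],
    )
    print(reply.choices[0].message.content)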
Post #Az90r2ulYM406ABUbA by picofarad@noauthority.social
0 likes, 0 repeats
If someone were paying me a couple hundred an hour to do this I'd still complain, but I'd have a smile on my face while I did it.
Post #Az90r3DCRooD1KoDKa by ThatCrazyDude@noauthority.social
1 like, 0 repeats
@picofarad I'm not sure whether anything's leaking, because I've never tested it, but I'm pretty darn sure that stable diffusion only uses Python as the interface. The API, if you will. The vast majority of the core stuff is actually done in C++, Lua and such ;)
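(For what it's worth, the split is real, though for PyTorch-based stacks like stable diffusion it's C++ and CUDA rather than Lua; Lua was the original Torch.) The point shows up in miniature with a rough sketch: the same matrix multiply is slow as an interpreted Python loop and nearly instant through torch, which dispatches to a precompiled kernel:

    # Sketch: Python as the interface, compiled code doing the work.
    # The nested loop runs in the interpreter; a @ b dispatches to a
    # precompiled C++ (or CUDA) kernel.
    import time
    import torch

    n = 128
    a, b = torch.rand(n, n), torch.rand(n, n)
    al, bl = a.tolist(), b.tolist()

    t0 = time.perf_counter()
    slow = [[sum(al[i][k] * bl[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]
    t1 = time.perf_counter()
    fast = a @ b
    t2 = time.perf_counter()
    print(f"interpreted loop: {t1 - t0:.3f}s  compiled kernel: {t2 - t1:.6f}s")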
Post #Az90r3fuj3mGSOF95U by picofarad@noauthority.social
0 likes, 0 repeats
@ThatCrazyDude and if I load a model that doesn't fit in VRAM, it does take "hours". Telling a 70B LLM "make me a single file 2048 clone i can run in firefox" takes 3.5 hours, because it uses system RAM and CPU in addition to the GPU.
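That matches how llama.cpp-based runners (LM Studio among them) handle an oversized model: as many layers as fit go to VRAM, the rest run on the CPU out of system RAM, and the CPU layers set the pace. A sketch with llama-cpp-python; the model path and layer count are hypothetical:

    # Sketch: partial GPU offload. Hypothetical path and layer count;
    # layers that don't fit in VRAM stay on the CPU, which is where
    # the hours go on a 70B model.
    from llama_cpp import Llama

    llm = Llama(
        model_path="/models/llama-70b.Q4_K_M.gguf",
        n_gpu_layers=40,  # however many layers 24 GB of VRAM will hold
        n_ctx=4096,
    )
    out = llm("Make me a single-file 2048 clone I can run in Firefox.",
              max_tokens=2048)
    print(out["choices"][0]["text"])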