Post ArzA2KdGZUU8JP2dgO by virtuous_sloth@cosocial.ca
(DIR) Post #Arz1u1Q3K2bwm1O5dg by mntmn@mastodon.social
0 likes, 0 repeats
i'm a bit confused now
(DIR) Post #Arz1yjegnNov7yTtzc by gsuberland@chaos.social
0 likes, 0 repeats
@mntmn sigh, of course
(DIR) Post #Arz20Sq4I1V3YFd05I by mntmn@mastodon.social
0 likes, 0 repeats
am i living under a rock and am i like the only person who can't imagine that everybody desires to run local AI models? or what's going on here
(DIR) Post #Arz29G4n6s6eZjhxCK by tofu@is-a.cat
0 likes, 0 repeats
@mntmn AI you can run locally and store the weights of is the way to go if people are meant to have any sort of control of this tech, otherwise you might end up in a dystopia where everything talks to a server somewhere that can gatekeep what you're doing.
(DIR) Post #Arz29cx83WdxWdIewq by q66@gts.q66.moe
0 likes, 0 repeats
@mntmn pretty sure like 95% of fedi will tell you they don't want to run any ai models
(DIR) Post #Arz29dKAfr4ig653rc by mntmn@mastodon.social
0 likes, 0 repeats
@q66 that is what i hoped to be true... and outside of fedi? isn't that a super niche?
(DIR) Post #Arz2DrUrvHJ9n96Iue by thomholwerda@exquisite.social
0 likes, 0 repeats
@mntmn Review guides and press instructions. The maker of the reviewed product pressures the reviewer into talking about certain topics. Reviewer complies.
(DIR) Post #Arz2OFUnYBuB7Q1LDE by gsuberland@chaos.social
0 likes, 0 repeats
@mntmn there are people who are super into it, but I think those voices are heavily over-represented in tech spaces and the average user doesn't give a crap.
(DIR) Post #Arz2WTXxC3Sj0tQ3w8 by cienmilojos@infosec.exchange
0 likes, 0 repeats
@mntmn Maybe the question isn't "should you?" I think it's pretty damn neat you can do this locally and completely offline. For the savvy of us, it puts more pressure, I think, on companies to provide more on-device options.
(DIR) Post #Arz2cYuKAuMbNRmke8 by boilingsteam@mastodon.cloud
0 likes, 0 repeats
@mntmn not everybody, but once you get used to local AI models you usually don't go back.
(DIR) Post #Arz2mpEALu8Qt9xZs8 by morl0ck@mastodon.social
0 likes, 0 repeats
@mntmn this
(DIR) Post #Arz3FiP4nq2k7dq520 by jpenuchot@mamot.fr
0 likes, 0 repeats
@mntmn You're not the only one who fails to understand how Framework will sell their Desktop, I can tell you.
(DIR) Post #Arz3L0qjeF3kmPIngG by mntmn@mastodon.social
0 likes, 0 repeats
@thomholwerda twist: the "reviewer" here is the CEO of framework. video link: https://www.youtube.com/watch?v=zI6ZQls54Ms
(DIR) Post #Arz3OmFbrsOwf1jSIi by wolf480pl@mstdn.io
0 likes, 0 repeats
@mntmn not everybody, but the people who do tend to have a lot of money. I guess Framework's idea is to have them subsidize its laptop business, and I won't mind if it turns out that way.
(DIR) Post #Arz3RkFHmPTJJIznLE by byte@awawa.club
0 likes, 0 repeats
@mntmn there was a huge backlash on their stream exactly for this reason. AMD had a deal with them for new CPUs, and "you must drink AI kool-aid" most likely was a clause in that deal. It's a shame :/
(DIR) Post #Arz3RkaCWeCaMAmUwS by mntmn@mastodon.social
0 likes, 0 repeats
@byte ahh this makes sense
(DIR) Post #Arz3Zb2hsbmD5Jr7qa by mntmn@mastodon.social
0 likes, 0 repeats
@cienmilojos ok i get that, but if you needed to do that, wouldn't you use a "normal" PC?
(DIR) Post #Arz3huS1bV25ayf7JI by cienmilojos@infosec.exchange
0 likes, 0 repeats
@mntmn you can, I have. I'm assuming some of the larger models require a lot more processing power and storage space to run locally though
(DIR) Post #Arz4Acx5k9Wnimt3rM by woodworker@indieweb.social
0 likes, 0 repeats
@mntmn not everyone, but for me it sounds interesting to be able to run an ai model that does not talk to the interwebs, or only when i want to. but i am also a bit strange :D
(DIR) Post #Arz4AdJ4QR6ooxAc7M by woodworker@indieweb.social
0 likes, 0 repeats
@mntmn like a local ai chatbot that has access to all my markdown notes
(DIR) Post #Arz4Adf36igpv7SANM by mntmn@mastodon.social
0 likes, 0 repeats
@woodworker is there actually a thing that you can easily install and that does that? (genuine question, i'm not up to date on this stuff)
(DIR) Post #Arz4Ewn5vQlwPOGrU8 by mntmn@mastodon.social
0 likes, 0 repeats
@wolf480pl alright. i would be curious about sales numbers later
(DIR) Post #Arz4UfnqqxhoM57jWq by mntmn@mastodon.social
0 likes, 0 repeats
btw i don't wanna mock framework here, i'm obviously interested in the fine points of computer marketing/strategy/branding and how it's colliding with sociocultural developments and the semantic bits and pieces that fall out of this (sorta like in a particle collider)
(DIR) Post #Arz5LWgjTM9BQxzotU by BitWire@social.tchncs.de
0 likes, 0 repeats
@mntmn @cienmilojos for the big models, GPUs that have enough VRAM are way more expensive than the Framework Desktop. I would guess that is the market segment that Framework tries to target here, because the integrated GPU can access afaik up to 96GB of the 128GB in the top spec.
(DIR) Post #Arz61pIIgc91IljqTY by ignaloidas@not.acu.lt
0 likes, 0 repeats
@mntmn@mastodon.social @cienmilojos@infosec.exchange the problem is that you want a bunch of high bandwidth memory for the GPU, and most regular GPU's don't have enough for "mid-size" models, whereas with 128GB, you can run pretty damn large models on the Framework Desktop
(DIR) Post #Arz61pdDQqsILdWY4m by mntmn@mastodon.social
0 likes, 0 repeats
@ignaloidas @cienmilojos ah, that's interesting!
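To make the memory argument above concrete, here is a back-of-envelope weight-sizing sketch in Python. The parameter counts, quantization levels, and memory figures are illustrative assumptions, not benchmarks or Framework specs:

    # Rough sizing sketch: memory needed just for a model's weights.
    # All numbers are illustrative assumptions, not measurements.

    def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
        """Approximate weight footprint in GB: parameters * bits / 8."""
        return params_billions * bits_per_weight / 8

    print(weight_memory_gb(70, 8))   # 70.0 GB: exceeds any consumer GPU's VRAM
    print(weight_memory_gb(70, 4))   # 35.0 GB: still too big for a 24 GB card,
                                     # but fits easily in ~96 GB of
                                     # GPU-accessible unified memory

This is why a large pool of GPU-addressable memory matters more than raw compute for running big models locally.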
(DIR) Post #Arz6UmzcK0LqYuoHhY by naturepoker@genomic.social
0 likes, 0 repeats
@mntmn if this is about their AMD desktop, people have been busy pitching the platform as a 'cheap' way to have a multi-hundred-gig VRAM machine. Even leaving out the AI hype (let's say it's aimed at traditional machine learning researchers) I found the emphasis a bit odd... Isn't it advertising AMD's new iGPU set then, and not necessarily the Framework computer?
(DIR) Post #Arz7RVeaQpwns3yOv2 by s_levi_s@mastodon.social
0 likes, 0 repeats
@mntmn well, kinda? Are you surprised people want to use AI models? Or that they want to run local models? The group who wants to use local models is a small subset so it is not everybody. But it is for sure a potential market
(DIR) Post #Arz8HzjNqlFqDFFBAG by computersandblues@post.lurk.org
0 likes, 0 repeats
@mntmn @q66 i imagine it is super niche. but maybe it's one way the current anti big tech sentiment in the eu is going (where big tech is roughly the same as silicon valley, which is roughly the same as the u.s.a.)? just like some design studios had (or have?) some of these chunky mac pros standing around, teams hoping to use ai professionally for whatever perceived benefit may use these models locally on dedicated machines? that's still very different to traditional desktop usage, and of course wildly speculative
(DIR) Post #ArzA2KdGZUU8JP2dgO by virtuous_sloth@cosocial.ca
0 likes, 0 repeats
@mntmn It feels like AMD paid Framework to develop this, with enough upside that Framework felt it was worth the distraction from its core mission. In that view, the sales pitch is AMD's rather than Framework's, at least directly.
(DIR) Post #ArzALWwZ1SzxzrDqyW by chesterdott@fosstodon.org
0 likes, 0 repeats
@mntmn If one day we can say in our home "Hello, computer, do me this..." I'd rather it be local than in Google/Microsoft/Meta cloud servers.
(DIR) Post #ArzAqgTdcxoyXcY27M by leonardo@mastodon.bida.im
0 likes, 0 repeats
@mntmn @woodworker something like llamaindex https://docs.llamaindex.ai/en/stable/ can do that, running locally on your machine. In my experience it can be a useful tool for large and heterogeneous sets of documents, providing useful, context-aware answers
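For anyone curious how small the setup leonardo describes can be, here is a minimal sketch: indexing a folder of markdown notes with LlamaIndex and querying it with a purely local model. It assumes the llama-index package plus a running local Ollama instance serving a model; the module paths vary between llama-index versions, and the folder, model names, and query are illustrative:

    # Index local markdown notes and query them with a local LLM.
    # Assumes: pip install llama-index llama-index-llms-ollama
    #          llama-index-embeddings-huggingface
    # plus Ollama running locally and serving "llama3". No cloud calls.

    from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
    from llama_index.llms.ollama import Ollama
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding

    Settings.llm = Ollama(model="llama3")  # local chat model
    Settings.embed_model = HuggingFaceEmbedding(
        model_name="BAAI/bge-small-en-v1.5")  # local embedding model

    docs = SimpleDirectoryReader("./notes", required_exts=[".md"]).load_data()
    index = VectorStoreIndex.from_documents(docs)

    response = index.as_query_engine().query("summarize my notes on keyboards")
    print(response)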
(DIR) Post #ArzBlSllGTqAXA6D44 by tor@norden.social
0 likes, 0 repeats
@mntmn The interesting question would be: what is the performance of the #Framework cluster? This is not shown in the video. In another video someone tested #deepseek on a server with more than 1TB RAM. Spoiler: the performance is not really usable with a local LLM in RAM (only 3 tokens per second). https://youtu.be/A8N3zKUJ0yE?t=905
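The ~3 tokens/second figure lines up with a simple bandwidth argument: to generate each token, a dense model streams roughly all of its weights through memory once, so decode speed is capped near bandwidth divided by model size. A sketch with assumed numbers, not measurements:

    # Decode-speed ceiling for a dense model held in RAM:
    # tokens/s <= memory bandwidth / bytes of weights read per token.
    # Bandwidth and model-size figures below are illustrative assumptions.

    def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
        return bandwidth_gb_s / model_gb

    print(max_tokens_per_sec(200, 70))  # ~2.9 tok/s: huge model in server RAM
    print(max_tokens_per_sec(256, 35))  # ~7.3 tok/s: 4-bit-quantized model on
                                        # ~256 GB/s unified memory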
(DIR) Post #ArzFZeRuPhAX6FR8DY by mntmn@mastodon.social
0 likes, 0 repeats
@jevans oh yeah, whisper for voice recognition/transcription is an ML use case that i get. but that doesn't require a lot of CPU/GPU power afaik?
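For reference, local Whisper transcription really is small. A sketch using the open-source openai-whisper package (the audio filename is hypothetical); its smaller models run acceptably on CPU, which supports the point that this use case doesn't demand a big GPU:

    # Local speech-to-text with the openai-whisper package.
    # Assumes: pip install openai-whisper, plus ffmpeg installed.
    import whisper

    model = whisper.load_model("base")        # ~74M parameters, fine on CPU
    result = model.transcribe("meeting.mp3")  # hypothetical audio file
    print(result["text"])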
(DIR) Post #ArzNrXs6PGMzXtjQxc by etam@im-in.space
0 likes, 0 repeats
@mntmn"The product is hype, the customers are investors" (quote from @rysiek )
(DIR) Post #ArzQ1knBQFcyQGQV2O by markstos@urbanists.social
0 likes, 0 repeats
@mntmn Everyone doesn't need a Mac Studio Ultra either, yet it exists.
(DIR) Post #ArzQPf8M718rp5jKnA by mntmn@mastodon.social
0 likes, 0 repeats
@markstos idk when i think about mac my associations are design, music production, video editing at high res, that's stuff i understand better
(DIR) Post #ArzVIBvompC6zv7fsG by markstos@urbanists.social
0 likes, 0 repeats
@mntmn But in 2024 it's required to market how good your hardware is at AI.
(DIR) Post #ArzX0fw6guDake27ua by mntn@mastodon.sdf.org
0 likes, 0 repeats
@mntmn I'm vaguely interested in running local models to experiment with contextual search and summarization (I have a massive PDF library), but not interested enough to finish the setup process. Way down the list of priorities. On the other hand, looking at that thumbnail… when will we see the first MNT 10 inch mini rack server? ;)
(DIR) Post #ArzcPOXszAufpFJJtg by mntn@mastodon.sdf.org
0 likes, 0 repeats
@mntmn @q66 It's not as niche as you'd expect, it's fairly popular with younger people exploring development and computers. I do expect the popularity to fade quickly just like cryptocurrencies did. (Cryptocurrency is still popular in investment circles but I feel like it's become a small niche among developers and tech enthusiasts.)
(DIR) Post #ArzvtcOPxrYCaZ0VFo by stylus@social.afront.org
0 likes, 0 repeats
@mntmn It's sure hard to gather actual numbers, as opposed to vibes.

As parallel posters have said, running a LLM on your own machine appears to avoid some of the continuing assumed misuse of customer input by "AI" companies. (However, it does little or nothing to address the environmental costs of training LLMs, or the intellectual property problems or many other legitimate issues surrounding LLMs and other current "AI" tech.)

And of course wouldn't you want to be ready when the hypothetical truly open and ethically-trained LLM system drops??

As anecdata, in 2022 I spent about $350 on a graphics card capable of running smaller LLMs as well as some Stable Diffusion-family image generation models. As well as presumably being better for gaming than my previous GPU, which was years old at that point.

I feel like -- same as most people don't want to self-host e-mail or a website -- it's sure true that most people don't care to put the work into this. But it's entirely possible that a self-selected group of people with a relative plenty of disposable income would select a system that might run AI, probably, if they were ever sufficiently interested to make it work.
(DIR) Post #As06DRE4BwtdHwFs3c by bnys@lasersword.club
0 likes, 0 repeats
@mntmn they're a san francisco company and it seems as if they're unaware of being inside a bubble
(DIR) Post #As1KA4J0XsW7qkoUzo by molusk@piaille.fr
0 likes, 0 repeats
@mntmn wait!? They designed a non-upgradable desktop after fully customizable laptops with swappable parts!? Are they running backwards? I really don't understand! And I am a happy framework laptop user... I don't think their move is wise, but as stated by others before, maybe fedi users are in a bubble? Or maybe they are?