Post At5OSZCL9jzSwDq7lI by wizzwizz4@fosstodon.org
 (DIR) Post #At5DQ1SqVWtqEv913w by Gina@fosstodon.org
       2025-04-14T13:32:12Z
       
       0 likes, 0 repeats
       
       Does anyone know what the #foss answer will be for Microsoft Copilot? Like a personal GPT, but less integrated? #ai #Microsoft #mscopilot #opensource
       
 (DIR) Post #At5DTN88PVvGIv3Peq by wizzwizz4@fosstodon.org
       2025-04-14T13:32:49Z
       
       0 likes, 0 repeats
       
       @Gina What do people want it for?
       
 (DIR) Post #At5DiyzluYOy5IaP0y by funz@todon.eu
       2025-04-14T13:35:35Z
       
       0 likes, 0 repeats
       
       @Gina not to use it - that is the foss answer. there is no free or open source LLM.
       
 (DIR) Post #At5Dq81Wk61q71oYkq by trevdev@fosstodon.org
       2025-04-14T13:36:56Z
       
       0 likes, 0 repeats
       
       @Gina OI, organic intelligence. It's proprietary but only integrated into your own brain
       
 (DIR) Post #At5DtWWI7fPoxFp5rU by yngmar@social.tchncs.de
       2025-04-14T13:37:32Z
       
       0 likes, 0 repeats
       
       @Gina Original code, art and writing instead of sloppification is the answer.
       
 (DIR) Post #At5Eeq09VZ2SzDLr6m by randomgeek@masto.hackers.town
       2025-04-14T13:46:04Z
       
       0 likes, 0 repeats
       
       @Gina we probably won't ever see GNU/GPT, but locally hosted FOSS LLM stuff is out there via tools like Ollama and LM Studio. (Answering because open and local are the only facets of LLMs that interest me, so I have a non-caustic response.) As with all things open source, it takes extra work to set up and integrate with your workflow. Less so with LM Studio, which I think only has some elements available as open source. https://ollama.com https://lmstudio.ai
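
       A minimal sketch of the kind of workflow integration mentioned above, using LM Studio's OpenAI-compatible local server. This assumes the server is enabled on its default port (1234) and that a model is already downloaded and loaded; the model identifier below is only a placeholder.

       import json
       import urllib.request

       # LM Studio exposes an OpenAI-compatible HTTP API on localhost:1234 by default.
       URL = "http://localhost:1234/v1/chat/completions"

       payload = {
           "model": "local-model",  # placeholder; use whatever model you have loaded
           "messages": [
               {"role": "user", "content": "Summarise: the team agreed to ship the FOSS build on Friday."}
           ],
           "temperature": 0.7,
       }

       req = urllib.request.Request(
           URL,
           data=json.dumps(payload).encode("utf-8"),
           headers={"Content-Type": "application/json"},
       )
       with urllib.request.urlopen(req) as resp:
           reply = json.loads(resp.read())
       print(reply["choices"][0]["message"]["content"])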
       
 (DIR) Post #At5F3VZSbamqrUd8vA by sunglocto@vis.social
       2025-04-14T13:50:33Z
       
       0 likes, 0 repeats
       
       @Gina it basically already exists in a rudimentary way. you can already get tons of open source models out there on the internet + apps for interfacing with them that are also open source. as for actually being able to integrate into your system, that will require actual effort, code, etc. since most people in the foss sphere despise llms for good reason people probably won't work on something like that. also if people's systems are wildly different (which they are) then you would have to
       
 (DIR) Post #At5Fi4C1IjPDp60R3Q by Gina@fosstodon.org
       2025-04-14T13:57:54Z
       
       0 likes, 0 repeats
       
       @wizzwizz4 as a project manager, it's really convenient as a secretary and helps me write documents (it can crawl through all org documents, Teams, etc.)
       
 (DIR) Post #At5FjfSmDUdLhjbiPw by Gina@fosstodon.org
       2025-04-14T13:58:11Z
       
       0 likes, 0 repeats
       
       @sotolf idk I use chatgpt daily and love it
       
 (DIR) Post #At5FkwTCTGuejvPfXs by Gina@fosstodon.org
       2025-04-14T13:58:25Z
       
       0 likes, 0 repeats
       
       @trevdev yall I don't have a lot of that 😂
       
 (DIR) Post #At5J7dCaZgkjVK9bXM by catapult@masto.hackers.town
       2025-04-14T14:36:04Z
       
       0 likes, 0 repeats
       
       @Gina as long as I can CHOOSE not to use it. Or choose a different "AI" setup. But copy Microsoft, and make me use a specific AI? Nope. That is exactly why I use free and open source software. Let ME choose what software I run. Don't force me to use something I don't like. Let ME choose whether to run it or not. That's all I am asking.
       
 (DIR) Post #At5K3hLFLEjd8ftLpw by kta@hostux.social
       2025-04-14T14:46:35Z
       
       0 likes, 0 repeats
       
       @Gina there are two offerings related to this: "LLM Twins" and "LLM CoPilots". Devs can fetch pre-trained LLMs and fine-tune/augment them with local data. Stuff like samples of your personal writing, emails, work text messages, social media posts, etc. You basically teach the LLM to generate content that you write. You can automate it so it literally posts things to social media and responds to emails (agents), or you can manually ask it to generate content. Can ask ChatGPT about both.
       
 (DIR) Post #At5LJTzipkGAKZulO5 by mike@fosstodon.org
       2025-04-14T15:00:40Z
       
       0 likes, 0 repeats
       
       @Gina If I had to choose, I'd probably go with #Ollama (which has been mentioned several times already). It's licensed under the MIT license and the models are about as close to open source as you can get. When I play with LLMs, it's what I use. Locally run and with an API that could be used to integrate with other stuff. I also have #OpenWebUI to make things prettier. Both can run locally, though OpenWebUI can integrate with cloud LLMs too. Of course, tomorrow everything could change.
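
       For reference, a minimal sketch of the API integration mentioned above, against Ollama's local REST endpoint. This assumes Ollama is running on its default port (11434) and that the model named here has already been fetched with "ollama pull mistral"; the model choice is just an example.

       import json
       import urllib.request

       # Ollama listens on localhost:11434 by default; /api/generate does a one-shot completion.
       URL = "http://localhost:11434/api/generate"

       payload = {
           "model": "mistral",  # example model; any locally pulled model works
           "prompt": "Write a two-sentence status update about migrating our wiki to FOSS tools.",
           "stream": False,     # ask for a single JSON response instead of a stream
       }

       req = urllib.request.Request(
           URL,
           data=json.dumps(payload).encode("utf-8"),
           headers={"Content-Type": "application/json"},
       )
       with urllib.request.urlopen(req) as resp:
           print(json.loads(resp.read())["response"])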
       
 (DIR) Post #At5M6OCJAvtAXd7cx6 by mike@fosstodon.org
       2025-04-14T15:09:29Z
       
       0 likes, 0 repeats
       
       @jcrabapple That's a good recommendation on the model. I've been using Gemma:12b, but that's a Google model and on my hardware it's NOT fast. It gets the job done, and if you're not worried about waiting a couple of minutes for a response it's great. Still, there's the Googliness of it all. I haven't tested the Mistral models yet, so I'm grabbing 7b right now. I'm giving it a spin. @Gina
       
 (DIR) Post #At5MMfgJwCQTOtvLJh by mike@fosstodon.org
       2025-04-14T15:12:22Z
       
       0 likes, 0 repeats
       
       @ton That's a pretty cool resource! @Gina @dingemansemark
       
 (DIR) Post #At5MYEbURFagtYB6pc by mike@fosstodon.org
       2025-04-14T15:14:32Z
       
       0 likes, 0 repeats
       
       @jcrabapple I like it because it's pretty reliable for my uses (which, considering it's an LLM, means the bar is pretty low). I really like that it's multimodal. That's a nice feature, especially through OpenWebUI, to ease UX. @Gina
       
 (DIR) Post #At5NpjPtEK2NKvewZE by mike@fosstodon.org
       2025-04-14T15:28:54Z
       
       0 likes, 0 repeats
       
       @jcrabapple Yea, I've tied mine in with my Home Assistant to help with "smart home" type stuff. It does a pretty good job when it's used that way (better than HA without it), but when it's used to ask random questions it's super hit or miss. My kids will use it like Alexa, and in that situation it can be... less reliable. @Gina
       
 (DIR) Post #At5OSZCL9jzSwDq7lI by wizzwizz4@fosstodon.org
       2025-04-14T15:35:57Z
       
       0 likes, 0 repeats
       
       @Gina Can you describe how you use it? It sounds like there are simpler (more useful) natural-language-processing systems you could be using, although I don't have enough details yet to make one.
       
 (DIR) Post #At5SeBsO5jaNtlIyzA by futureisfoss@fosstodon.org
       2025-04-14T16:22:52Z
       
       0 likes, 0 repeats
       
       @Gina GPT4All is a good option for running local LLMs - https://www.nomic.ai/gpt4all
       Source code: https://github.com/nomic-ai/gpt4all
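
       A minimal sketch of driving GPT4All from its Python bindings (assumes "pip install gpt4all"; the model filename is a placeholder taken from the GPT4All catalogue and is downloaded into the local cache on first use).

       from gpt4all import GPT4All  # pip install gpt4all

       # The model file is fetched automatically on first use if it isn't cached yet.
       model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

       # A chat session keeps conversational context between generate() calls.
       with model.chat_session():
           answer = model.generate(
               "Draft a short, friendly reminder to the team about Friday's release.",
               max_tokens=200,
           )
           print(answer)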
       
 (DIR) Post #At5bBY9t4CeaC9uNou by xerxespersrex@mastodon.social
       2025-04-14T17:58:25Z
       
       0 likes, 0 repeats
       
       @Gina A formal education, or a good book, or your friends or larger community. Augmented with a search engine for quick questions. There is no ethical way to use AI in most use cases that it is advertised for. For instance, using AI as an assistant to write code strips any licensing information or attribution from the code - accepting this use of AI is saying that we can use AI to "launder" code away from its original licenses into potentially proprietary software. This is unacceptable.