Post AoFCegILuJMWxWsWIq by napocornejo@masto.ai
(DIR) Post #AoFCegILuJMWxWsWIq by napocornejo@masto.ai
2024-11-20T19:47:05Z
1 likes, 1 repeats
You should all try the French Mistral.ai #LLM. Seems powerful. Talk to it here: https://chat.mistral.ai/chat
(DIR) Post #AoFCtIyss1iGznkHNg by Ai2ObsFjnLcY8CdUMi.KuteboiCoder@subs4social.xyz
2024-11-20T19:52:54.823Z
0 likes, 0 repeats
@napocornejo@masto.ai Yesterday I played with #phi3 #LLM - actually it's a state-of-the-art #SLM - on a cloud #GPU. It runs faster than mistral-nemo on the same GPU while still giving encyclopedic answers. I haven't compared phi3 against a mid-sized or large #Mistral #Mixtral model. When I do, it just might surprise me. @icedquinn@blob.cat
(DIR) Post #AoFgjnfKvaEHtLzC4G by napocornejo@masto.ai
2024-11-20T23:06:17Z
0 likes, 0 repeats
@KuteboiCoder @icedquinn Interesting. I wasn't aware of these #phi3 models.
(DIR) Post #AoFgjof1EFZMyeqSDA by icedquinn@blob.cat
2024-11-21T01:27:21.488828Z
1 likes, 1 repeats
@napocornejo @KuteboiCoder neither was i, although i don't typically pay attention to LLMs :blobfoxinnocent: i would like to finagle a text to speech model though.