Posts by DavidObando@hachyderm.io
(DIR) Post #ASMX4Dq3FZTcdi4PSK by DavidObando@hachyderm.io
2023-02-05T07:33:44Z
0 likes, 0 repeats
@simon Not just research: actual products are out. I was part of the team that produced Visual Studio IntelliCode line completions. Think of it as Gmail's Smart Compose for code (C#, Python, JS/TS, C++), and it runs locally in the IDE. We used a lot of compression techniques to make it product-friendly, since model size determines not only the memory it consumes but also the latency in providing responses. Here's a paper by our data science team on GPT-C: https://arxiv.org/pdf/2005.08025.pdf
(DIR) Post #ASN1IEJTHPt0n3QVFY by DavidObando@hachyderm.io
2023-02-05T07:46:05Z
0 likes, 0 repeats
@simon There's a little demo of IntelliCode completions in the video here: https://devblogs.microsoft.com/visualstudio/type-less-code-more-with-intellicode-completions/ This all runs on the local computer, consumes 1 GB of memory tops, and produces predictions within a reasonable threshold (it varies per computer, but for modern machines it's under 60 ms). The underlying tech is similar to what you're discussing: an LLM focused on a specialized body of knowledge and compressed to fit on a local computer.
(DIR) Post #AWMjmTWjfbzBFU6R6G by DavidObando@hachyderm.io
2023-06-05T01:24:27Z
0 likes, 0 repeats
@simon This was a great read, and it's inspiring me to write about IntelliCode (a Visual Studio and VS Code technology): its history, data, methods of extraction, etc.