Post AUDoRqsIRpDfaHSKDQ by kazaroth@mastodon.social
 (DIR) Post #AUDf3FvvFEeaTULgW0 by simon@fedi.simonwillison.net
       2023-04-01T20:34:49Z
       
       0 likes, 0 repeats
       
      Made some quick notes on how to use the OpenAI Python library to make ChatGPT API calls and stream out the response tokens as they arrive: https://til.simonwillison.net/gpt3/python-chatgpt-streaming-api
       
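      A minimal sketch of the streaming pattern the notes describe, using the 0.x openai SDK that was current when this thread was written. The helper name `delta_text` and the `stream_chat` wrapper are illustrative, not from the linked notes; the sketch assumes OPENAI_API_KEY is set in the environment.

      ```python
      def delta_text(chunk) -> str:
          """Pull the incremental text out of one streamed chunk, if any."""
          return chunk["choices"][0]["delta"].get("content", "")

      def stream_chat(prompt: str, model: str = "gpt-3.5-turbo") -> str:
          # Imported lazily so delta_text() is usable without the SDK installed.
          import openai

          # stream=True makes create() return an iterator of chunks
          # instead of one completed response object.
          response = openai.ChatCompletion.create(
              model=model,
              messages=[{"role": "user", "content": prompt}],
              stream=True,
          )
          collected = []
          for chunk in response:
              piece = delta_text(chunk)
              print(piece, end="", flush=True)  # show each token as it arrives
              collected.append(piece)
          return "".join(collected)
      ```

      Each streamed chunk carries a small "delta" rather than the full message so far, which is why the pieces are concatenated at the end.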
 (DIR) Post #AUDfN6bFeiYdJr93hY by laimis@mstdn.social
       2023-04-01T20:38:28Z
       
       0 likes, 0 repeats
       
      @simon why is there this interest in getting responses to come in bits? It's so annoying. I like how Bard just drops it. Also with OpenAI APIs I integrate I just take the full response and write it out. Feels like a gimmick otherwise.
       
 (DIR) Post #AUDgCK7z7QGq2nsX1U by simon@fedi.simonwillison.net
       2023-04-01T20:47:52Z
       
       0 likes, 0 repeats
       
      @laimis I'm impatient! I like to start getting results as soon as possible, especially when using GPT-4, which is noticeably slower than ChatGPT / GPT-3.5 at the moment
       
 (DIR) Post #AUDgSG4xwIuM7Y44UC by laimis@mstdn.social
       2023-04-01T20:50:44Z
       
       0 likes, 0 repeats
       
       @simon hehehe
       
 (DIR) Post #AUDoRqsIRpDfaHSKDQ by kazaroth@mastodon.social
       2023-04-01T22:20:19Z
       
       0 likes, 0 repeats
       
      @simon @laimis would you say it’s worth the speed/cost penalty for GPT-4 in most cases? Or if not: which?
       
 (DIR) Post #AUECKGhwumZsHsX6B6 by mahmoudajawad@mastodon.online
       2023-04-02T02:47:57Z
       
       0 likes, 0 repeats
       
      @simon TIL the openai SDK has asyncio-compatible methods, and I went all the way and created my own threading for it. /facepalm
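      A sketch of the asyncio-compatible side of the 0.x openai SDK mentioned above: `ChatCompletion.acreate()` is the awaitable counterpart of `create()`, so concurrent requests can ride the event loop with no hand-rolled threading. The `ask`/`ask_many` names are illustrative; the sketch assumes OPENAI_API_KEY is set in the environment.

      ```python
      import asyncio

      async def ask(prompt: str, model: str = "gpt-3.5-turbo") -> str:
          # Imported lazily so the sketch can be inspected without the SDK installed.
          import openai

          # acreate() is the awaitable version of create() in the 0.x SDK.
          response = await openai.ChatCompletion.acreate(
              model=model,
              messages=[{"role": "user", "content": prompt}],
          )
          return response["choices"][0]["message"]["content"]

      async def ask_many(prompts):
          # Run several requests concurrently on the event loop, not in threads.
          return await asyncio.gather(*(ask(p) for p in prompts))
      ```

      Called as `asyncio.run(ask_many(["q1", "q2"]))`, the requests overlap in flight rather than running one after another.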