Post Ab8FlRiP9Sk4muzHPM by grueproof@fosstodon.org
(DIR) Post #Ab8F1y4YW2rlozf9Q8 by tek@freeradical.zone
2023-10-25T14:56:36Z
0 likes, 0 repeats
Today’s project: writing a ChatGPT conversation app. This isn’t me buying into the idea. This is me accepting that it’s not going to go away if I ignore it.
(DIR) Post #Ab8FFWuMJFPvcuMNmq by dpreacher@freeradical.zone
2023-10-25T14:59:01Z
0 likes, 0 repeats
@tek Trying to do the same, except not so much a conversation app as "give all the inputs and let the AI generate one response to them"
(DIR) Post #Ab8FlRiP9Sk4muzHPM by grueproof@fosstodon.org
2023-10-25T15:04:42Z
0 likes, 0 repeats
@tek Yeah. Same.
(DIR) Post #Ab8Fp8J9FXbhktvqKm by tek@freeradical.zone
2023-10-25T15:05:29Z
0 likes, 0 repeats
@dpreacher Right on. I’m trying the conversation API where you have to send the whole prior conversation each time.
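A minimal sketch of the pattern tek describes: the chat API is stateless, so each request must carry the entire prior conversation. The function names and the stubbed model call below are illustrative, not tek's actual code; a real app would replace `send_to_model` with a call to the chat completions endpoint.

```python
# Sketch of a stateless chat loop: the server keeps no memory,
# so every request resends the full message history.

def send_to_model(messages):
    # Placeholder: a real implementation would POST `messages`
    # to the model API and return the assistant's reply text.
    return f"(reply to: {messages[-1]['content']})"

def chat_turn(history, user_input):
    """Append the user's message, resend the full history, record the reply."""
    history.append({"role": "user", "content": user_input})
    reply = send_to_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a helpful assistant."}]
chat_turn(history, "Hello")
chat_turn(history, "Tell me more")
# After two turns the history holds 5 messages: 1 system + 2 user + 2 assistant.
```

Note that `history` grows with every turn, which is exactly why token limits become a problem in longer conversations.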
(DIR) Post #Ab8G6MSzrQ4Waikmxc by dpreacher@freeradical.zone
2023-10-25T15:08:35Z
0 likes, 0 repeats
@tek Same... although I was experimenting with not sending chat_history at each step. I have scenarios where the tokens could get exhausted, since I know the total of multiple user inputs plus the generated output would exceed the limit. OTOH, I can also see cases where the context will be needed, but I never see a need for the chat history all the way back to the first input. Exploring and tinkering to see what can be optimized.
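One way to realize the optimization dpreacher hints at is a sliding window: keep the system prompt, drop the oldest messages, and keep only the newest turns that fit a token budget. This is a sketch under assumed names; the word-count "tokenizer" is a crude stand-in for a real one (e.g. the model's actual tokenizer).

```python
# Sketch: trim chat history to a token budget by dropping the oldest
# non-system messages while keeping the system prompt and recent context.
# Token counting here is a crude word count, not a real tokenizer.

def count_tokens(message):
    return len(message["content"].split())

def trim_history(history, budget):
    """Keep the system message plus the newest messages that fit in `budget`."""
    system = [m for m in history if m["role"] == "system"]
    rest = [m for m in history if m["role"] != "system"]
    kept, used = [], sum(count_tokens(m) for m in system)
    for msg in reversed(rest):          # walk newest-first
        cost = count_tokens(msg)
        if used + cost > budget:
            break                       # everything older is dropped
        kept.append(msg)
        used += cost
    return system + list(reversed(kept))
```

This keeps recent context (which dpreacher notes is sometimes needed) without ever resending the history "from the 1st input"; the trade-off is that the model silently forgets anything older than the window.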