Post ATmWdGbdf98Jmx8YYC by karstenbondy@im-in.space
 (DIR) Post #ATikiccPryF0EW6nVg by simon@fedi.simonwillison.net
       2023-03-17T22:42:42Z
       
       1 likes, 1 repeats
       
       A thought on prompt engineering: even as OpenAI-hosted language models get larger and "easier" to prompt... the next interesting frontier is going to be getting language models that are small enough to run on personal devices to do interesting things. And that's going to involve very sophisticated prompt engineering for a very long time.
       
 (DIR) Post #ATikvf4i32yLcFYnmy by simon@fedi.simonwillison.net
       2023-03-17T22:43:11Z
       
       1 likes, 0 repeats
       
       I still think prompt engineering for even the largest language models is a discipline deserving of respect, though: https://simonwillison.net/2023/Feb/21/in-defense-of-prompt-engineering/
       
 (DIR) Post #ATil7pF44bGkciCibw by msprout@fosstodon.org
       2023-03-17T22:44:59Z
       
       0 likes, 0 repeats
       
       @simon I'm already pretty impressed by quantization!
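
       (For illustration, a minimal sketch of what weight quantization looks like in practice: loading a causal LM in 8-bit via Hugging Face transformers plus bitsandbytes. The model name is illustrative and a CUDA GPU is assumed.)

       from transformers import AutoModelForCausalLM, AutoTokenizer

       model_name = "EleutherAI/gpt-neo-1.3B"   # illustrative; any causal LM repo id
       tokenizer = AutoTokenizer.from_pretrained(model_name)
       model = AutoModelForCausalLM.from_pretrained(
           model_name,
           device_map="auto",    # let accelerate place the layers
           load_in_8bit=True,    # quantize weights to int8 at load time (needs bitsandbytes)
       )

       prompt = "Quantization shrinks a model's memory footprint by"
       inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
       print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))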
       
 (DIR) Post #ATilJZzgah7uo7cELo by craigacp@sigmoid.social
       2023-03-17T22:45:35Z
       
       0 likes, 0 repeats
       
       @simon I'm not so sure about that. If you can collect data to train for the task, then there are a variety of soft prompting techniques which can generate the numerical vector to use as the prompt directly, without actually using words.
       
 (DIR) Post #ATilYqo4MdWdIKSpDk by bradexample@twit.social
       2023-03-17T22:52:02Z
       
       0 likes, 0 repeats
       
       @simon speaking of small devices, any thoughts on Petey (aka WatchGPT) for the Apple Watch?
       
 (DIR) Post #ATilnAEevOpnbej5JQ by simon@fedi.simonwillison.net
       2023-03-17T22:53:01Z
       
       0 likes, 0 repeats
       
       @craigacp That sounds like prompt engineering to me, just at a much more advanced level
       
 (DIR) Post #ATilxc0uV6roefHFqa by simon@fedi.simonwillison.net
       2023-03-17T22:53:38Z
       
       0 likes, 0 repeats
       
       @bradexample I bought it - it's cute! Still depends on a cloud model to run though, so not something I'd be comfortable having it listen to private meeting conversations and suchlike
       
 (DIR) Post #ATimA4ovDxa3TKc5yq by craigacp@sigmoid.social
       2023-03-17T22:56:45Z
       
       0 likes, 0 repeats
       
       @simon maybe, but it's amenable to the current gradient-based machine learning techniques we have; it doesn't require an intuition-guided search through language for the right phrasing. So it will look much more like model training than prompt engineering.
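
       (A rough sketch of the soft-prompting idea described above: learn a short block of continuous prompt embeddings by gradient descent while the model itself stays frozen. Assumes a Hugging Face causal LM; the model name, training data and hyperparameters are purely illustrative.)

       import torch
       from transformers import AutoModelForCausalLM, AutoTokenizer

       model_name = "gpt2"                          # illustrative small causal LM
       tokenizer = AutoTokenizer.from_pretrained(model_name)
       model = AutoModelForCausalLM.from_pretrained(model_name)
       model.requires_grad_(False)                  # freeze every model weight

       n_virtual = 20                               # number of learned "virtual tokens"
       embed_dim = model.get_input_embeddings().embedding_dim
       soft_prompt = torch.nn.Parameter(torch.randn(n_virtual, embed_dim) * 0.02)
       optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

       def loss_for(text):
           ids = tokenizer(text, return_tensors="pt").input_ids
           tok_embeds = model.get_input_embeddings()(ids)                     # (1, T, D)
           inputs = torch.cat([soft_prompt.unsqueeze(0), tok_embeds], dim=1)  # prepend soft prompt
           labels = torch.cat([torch.full((1, n_virtual), -100), ids], dim=1) # ignore virtual positions
           return model(inputs_embeds=inputs, labels=labels).loss

       # Tiny training loop over task data; only the soft prompt vectors get updated.
       for step in range(100):
           loss = loss_for("translate to French: cheese => fromage")
           optimizer.zero_grad()
           loss.backward()
           optimizer.step()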
       
 (DIR) Post #ATinjKMjWDx7m5njDU by smy20011@m.cmx.im
       2023-03-17T23:16:16Z
       
       0 likes, 0 repeats
       
       @simon Don't we already have that? You can run a 7B LLM on a Pixel 6 at 5 tokens/s
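
       (For reference, a minimal sketch of running a 4-bit quantized LLaMA-style model locally through the llama-cpp-python bindings; on a phone the same thing is usually done with llama.cpp compiled natively. The model path is illustrative.)

       from llama_cpp import Llama

       # Path to a 4-bit GGML-quantized 7B model file (illustrative).
       llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")

       out = llm("Q: Name three uses for a local LLM. A:", max_tokens=64, stop=["Q:"])
       print(out["choices"][0]["text"])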
       
 (DIR) Post #ATioESwFh98XL9GLWy by simon@fedi.simonwillison.net
       2023-03-17T23:22:15Z
       
       0 likes, 0 repeats
       
       @smy20011 yes, exactly - I've been writing about LLMs on personal devices here: https://simonwillison.net/series/llms-on-personal-devices/
       
 (DIR) Post #ATitF7X3GiWn0ebgqe by simon@fedi.simonwillison.net
       2023-03-18T00:18:16Z
       
       0 likes, 0 repeats
       
       Maybe prompt engineering for smaller large language models (hah) is the new assembly language micro-optimization. I'd love to see a demoscene form around optimized prompts for tiny LLMs like LLaMA 7B that fit on a Raspberry Pi
       
 (DIR) Post #ATiuEzLJoeQVZ6cU88 by djkz@toot.bldrweb.org
       2023-03-18T00:29:04Z
       
       0 likes, 0 repeats
       
       @simon I think prompt engineering is going to be useful for LLMs for quite a while, especially with LangChain: you should be able to take a feature request, for example, and split it across UX specialist, UI designer and developer roles, all looking it over from their respective perspectives to seamlessly build it for you (as opposed to using the default prompt and yelling "don't worry folks, our jobs will never go away").
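
       (A rough sketch of that role-based pattern, written here with plain chat-completion calls rather than any specific LangChain abstraction; the role prompts, model choice and feature request are illustrative, and it assumes OPENAI_API_KEY is set.)

       import openai

       ROLES = {
           "UX specialist": "Review this feature request for user-experience risks and open questions.",
           "UI designer": "Describe the screens and states this feature request implies.",
           "developer": "Break this feature request into implementation tasks with rough estimates.",
       }

       feature_request = "Let users export their dashboard as a shareable PDF."

       for role, instruction in ROLES.items():
           reply = openai.ChatCompletion.create(
               model="gpt-3.5-turbo",
               messages=[
                   {"role": "system", "content": f"You are a {role}. {instruction}"},
                   {"role": "user", "content": feature_request},
               ],
           )
           print(f"--- {role} ---")
           print(reply["choices"][0]["message"]["content"])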
       
 (DIR) Post #ATiwKK7QCP3lMAn7Hk by thatsregrettab1@mastodon.social
       2023-03-18T00:52:38Z
       
       0 likes, 0 repeats
       
       I've been clumsily exploring some of these AI tools, but I agree with you. One of my "superpowers" in the past was my seemingly innate ability to query Google and get better results than most. Bing, in trying to be more gentle with users, never worked for me. So, prompt engineering makes sense to me.

       How fast do you think LLMs will evolve to interpolate what the user really meant, which could devalue the skills of the average prompt engineer (even for the small, personal models you mention)?
       
 (DIR) Post #ATiwiTmduuWYHWafs8 by gcampax@mastodon.social
       2023-03-18T00:56:48Z
       
       0 likes, 0 repeats
       
       @simon some people might take umbrage at "tiny" for 7 billion...
       
 (DIR) Post #ATj1KMwmBPVxtoeY0u by ncrav@mas.to
       2023-03-18T01:48:44Z
       
       0 likes, 0 repeats
       
       @simon I would gladly pay for a 64KB LLM with an intro music and pixelated animation.
       
 (DIR) Post #ATmWdGbdf98Jmx8YYC by karstenbondy@im-in.space
       2023-03-19T18:23:28Z
       
       0 likes, 0 repeats
       
       @simon I am greatly looking forward to seeing the code that an LLM writes for an LLM that runs on a Commodore 64. I think at the very least, a C64 LLM could be capable of writing optimized prompts for other, better LLMs to execute.

       I have been working on crafting prompts for the past few months because a) it is fun to think in token-ese and to figure out how to make Stable Diffusion create weird things that aren't in its training data and b) I thought writers and artists will need to become good prompt engineers to have any hope of a career 2 or 3 years from now.

       Then I saw the prompts that GPT-3 can write for Stable Diffusion and realized that Prompt Engineering, if it even becomes a useful job skill, will only be so for another few months before employers realize that AIs are better than humans at it.

       I still enjoy writing prompts that create neon colored synthwave-style robotic velociraptors posing in front of nuclear explosions, but I know that an AI could write a better one than me.
       
 (DIR) Post #AUN5f8pFwmicFIUKn2 by pdyme@hachyderm.io
       2023-04-06T09:45:39Z
       
       0 likes, 0 repeats
       
       @simon How would test-driven development fit into prompting for an algorithm? Can we give it test cases and then ask for related algorithms?
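
       (One way that could look: paste the test cases into the prompt and ask for an implementation that satisfies them, then run the tests against the reply. A rough sketch; the tests and model choice are illustrative, and it assumes OPENAI_API_KEY is set.)

       import openai

       tests = '''
       def test_slugify_basic():
           assert slugify("Hello World") == "hello-world"

       def test_slugify_strips_punctuation():
           assert slugify("Hi, there!") == "hi-there"
       '''

       reply = openai.ChatCompletion.create(
           model="gpt-3.5-turbo",
           messages=[
               {"role": "system", "content": "Write a Python function that makes all of the given tests pass. Reply with code only."},
               {"role": "user", "content": tests},
           ],
       )
       print(reply["choices"][0]["message"]["content"])  # candidate implementation to run the tests against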
       
 (DIR) Post #AUNXm1j9XDbSyRhm7s by simon@fedi.simonwillison.net
       2023-04-06T15:00:53Z
       
       0 likes, 0 repeats
       
       @pdyme I've used Copilot to help write tests - I've not really done that with ChatGPT yet; usually I'm using it more for exploratory prototypes than production code
       
 (DIR) Post #AUNXyjJXJ1KPdBmRbE by simon@fedi.simonwillison.net
       2023-04-06T15:01:35Z
       
       0 likes, 0 repeats
       
       @pdyme I wrote a bit about using Copilot for tests here: https://til.simonwillison.net/gpt3/writing-test-with-copilot
       
 (DIR) Post #AUSvI51NT9gohhIvSK by pdyme@hachyderm.io
       2023-04-09T05:17:57Z
       
       0 likes, 0 repeats
       
       @simon Thanks