[HN Gopher] Automating Interactive Fiction Logic Generation with...
___________________________________________________________________
Automating Interactive Fiction Logic Generation with LLMs in Emacs
Author : dskhatri
Score : 58 points
Date : 2025-03-31 15:57 UTC (7 hours ago)
(HTM) web link (blog.tendollaradventure.com)
(TXT) w3m dump (blog.tendollaradventure.com)
| kleiba wrote:
| What strikes me as odd in the video: why would the author not
| fill the paragraphs?!
| iLemming wrote:
| Probably because that would make your paragraphs "rigid".
| If instead you use visual-line-mode and don't truncate lines
| -- the text just wraps around -- you only need to adjust
| the width of the window. That works nicely in "distraction-
| free" modes like writeroom-mode.
|
| I used to fill paragraphs all the time, but it turns out
| it's better to leave them as they are: you can never find a
| `fill-column` value that satisfies every case -- the default
| works sometimes, but in other cases you'd want it wider,
| etc.
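| As a minimal sketch (the settings below are illustrative,
| not the author's exact setup), the soft-wrapping approach
| amounts to:

```elisp
;; Soft-wrap long lines visually instead of hard-filling them.
;; `visual-line-mode' wraps at the window edge on word boundaries,
;; so paragraph width follows the window, not `fill-column'.
(add-hook 'text-mode-hook #'visual-line-mode)

;; Optional: wrap at a fixed visual column instead of the window
;; edge.  (Assumes the third-party `visual-fill-column' package.)
;; (setq-default visual-fill-column-width 80)
;; (add-hook 'visual-line-mode-hook #'visual-fill-column-mode)
```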
| spudlyo wrote:
| GPTel is a very powerful interface for working with LLMs in
| Emacs. It took me a while to understand that its real value isn't
| what you get with M-x gptel, which creates a dedicated chat
| session and buffer, but rather the ability to sling prompts,
| context, and LLM output around in a native Emacs way. You can add
| to the context from dired, from a file, from a buffer, you can
| select from various prescribed system prompts for different
| functionality, you can prompt from the minibuffer, the kill-ring,
| the existing buffer, a selection, you can have the responses go
| to the minibuffer, the kill-ring, a buffer, the echo area -- it's
| extremely flexible.
|
| I have a little helper function that uses gptel-request that I
| use while reading Latin texts. It sets the system prompt so the
| LLM acts as either a Latin to English translator, or with a
| prefix argument it breaks down the grammatical structure and
| vocabulary of a sentence for me. It's very cool.
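| A minimal sketch of such a helper (the function name and
| prompt strings are hypothetical; only `gptel-request' and
| its :system/:callback keywords come from gptel itself):

```elisp
;; Hypothetical helper in the spirit described above: translate the
;; active region, or with a prefix argument explain its grammar.
(require 'gptel)

(defun my/latin-assist (beg end &optional grammar)
  "Translate the Latin text between BEG and END via an LLM.
With prefix argument GRAMMAR, break down grammar and vocabulary
instead of translating."
  (interactive "r\nP")
  (gptel-request
      (buffer-substring-no-properties beg end)
    :system (if grammar
                "You are a Latin tutor.  Break down the grammatical \
structure and vocabulary of the given sentence."
              "You are a Latin to English translator.  Translate \
the given text concisely.")
    :callback (lambda (response _info)
                (if response
                    (message "%s" response)
                  (message "gptel request failed")))))
```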
| IngoBlechschmid wrote:
| Gwern shared an idea for exploiting the strengths of
| current-generation LLMs, despite their weaknesses, for
| "choose your own adventure"-style fiction:
| https://gwern.net/cyoa Having people vote on AI-generated
| potential continuations should yield better results and cut
| costs at the same time.
|
| From the title I thought this was an implementation of Gwern's
| idea, but it's not.
| lawlessone wrote:
| >Having people vote on AI-generated potential continuations
| should
|
| So the story never ends?
| ianbicking wrote:
| There have been experiments in wiki-style cyoa generation
| (letting the public create options instead of an LLM), but they
| suffer the same problem as LLM-generated stories: aimless
| wandering and lack of consistency.
|
| (As I think about it, LLM generation should be thought of
| as a many-author situation, since each generation comes in
| cold.)
|
| Stories need pacing, which exists over many passages, not just
| at the choice level. And then the passages should all be based
| on a single underlying world. Both of these fall apart quickly
| without a guiding author.
|
| I think this is resolvable with LLMs and appropriate prompting,
| but the naive approach seems cool only until you actually play
| out a few stories.
| zoogeny wrote:
| This is one of the most promising uses of LLMs that I have found
| in my own work. Many times I have an idea for a refactor or even
| a feature but I have this mental reluctance just due to the
| amount of code I would have to write. Like, I have a counter
| in my head of the number of keystrokes it will take to write
| a wrapper object in several places, and I hesitate.
|
| Just being able to tell an LLM "rewrite all of this code using
| this new pattern" and then dozens of code sites are correctly
| updated is a huge help. It makes me consider bigger refactoring
| or minor features that I might normally skip because I am lazy.
___________________________________________________________________
(page generated 2025-03-31 23:00 UTC)