[HN Gopher] My experience trying to write human-sounding article...
       ___________________________________________________________________
        
       My experience trying to write human-sounding articles using Claude
       AI
        
       Author : dv-tw
       Score  : 52 points
       Date   : 2023-11-22 17:22 UTC (5 hours ago)
        
 (HTM) web link (idratherbewriting.com)
 (TXT) w3m dump (idratherbewriting.com)
        
       | ctoth wrote:
        | I'd like to explore the fan-out pattern more:
       | 
        | - have it generate an outline
       | 
       | - have multiple clones write each section of the outline
       | 
       | - a stage which synthesizes the parallel-written sections,
       | capturing the best
       | 
       | - a stage which combines all sections and ensures flow based on
       | the original outline
       | 
       | - finally a stage which critiques and generates edits.
       | 
       | Iterate a couple times and you might actually have something
       | good!
       | 
       | Basically a lot of what this article does, but automated.
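        | 
        | A minimal sketch of that pipeline in Python, assuming a
        | hypothetical llm(prompt) helper that wraps whatever model API
        | you use:
        | 
        |     def llm(prompt: str) -> str:
        |         ...  # call your model of choice here
        | 
        |     def fan_out_article(topic: str, n_drafts: int = 3) -> str:
        |         outline = llm(f"Write a section outline for: {topic}")
        |         sections = []
        |         for heading in filter(str.strip, outline.splitlines()):
        |             # several "clones" draft the same section
        |             drafts = [llm(f"Outline:\n{outline}\n\n"
        |                           f"Write the section '{heading}'.")
        |                       for _ in range(n_drafts)]
        |             # synthesis stage: capture the best of the drafts
        |             sections.append(llm("Merge the best parts of these "
        |                                 "drafts:\n\n"
        |                                 + "\n---\n".join(drafts)))
        |         # flow stage: combine sections per the original outline
        |         article = llm(f"Outline:\n{outline}\n\nCombine these "
        |                       "sections and smooth the transitions:\n\n"
        |                       + "\n\n".join(sections))
        |         # critique stage: generate edits, iterated a couple times
        |         for _ in range(2):
        |             critique = llm(f"Critique this article:\n\n{article}")
        |             article = llm("Revise per this critique.\n\n"
        |                           f"Critique:\n{critique}\n\n"
        |                           f"Article:\n{article}")
        |         return article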
        
         | explaininjs wrote:
         | As a reader, would you ever prefer to be given the AI-fluffed
         | version instead of the outline? I say if you have a few concise
          | bullet points of the point you want to get across, fantastic:
          | let me read them and be on my way.
         | 
          | If, on the other hand, your mission is to produce a proper
          | piece of creative writing where the choice of words is the
          | art, then if you don't do that yourself, what's the point?
        
           | TaylorAlexander wrote:
           | I used to publish a TLDR at the top of some of my blog posts
           | because I'm so verbose!
        
           | ParetoOptimal wrote:
           | > As a reader, would you ever prefer to be given the AI-
           | fluffed version instead of the outline?
           | 
            | Why read Huckleberry Finn when you can read the CliffsNotes?
           | 
            | Summarization is lossy, and what it usually loses is the
            | experience.
        
             | explaininjs wrote:
             | See second paragraph.
        
             | Feathercrown wrote:
             | But having AI extend your notes includes all the loss of
             | the initial summarization, with extra AI randomness on top.
              | It can't recover the information lost in the summary;
              | that's what makes the summary lossy.
        
             | JambalayaJim wrote:
              | Adding fluff is the opposite of summarization.
        
           | chankstein38 wrote:
            | This is something I've wondered about for a while too.
            | Like, Notion's AI has a "make longer" button... why would I
            | ever want AI to arbitrarily fluff something up, adding
            | extra words, unless I was a kid writing an exam and needed
            | 3 more pages? I can't find any legitimate use for that
            | feature.
           | 
            | EDIT: In case it's not clear: no, I would rather read the
            | shortest version possible than one fluffed up by AI to hit
            | a word count. As far as creative stuff goes, I'm not sure
            | I've seen a situation where AI made something interesting
            | enough that I'd want to read extra words from it.
        
         | ParetoOptimal wrote:
         | How do you prompt something like that?
         | 
          | At least for 7B and 13B models, I've found they produce the
          | initial outline and then stop following the instructions.
        
         | methyl wrote:
          | We use a similar flow in Surfer AI and can confirm it
          | actually works wonders.
        
       | fredgrott wrote:
        | That implies that those of us with newsletters need to write in
        | argument form, as it is way harder for AIs to emulate
        | argumentative writing styles and unique voices.
        
       | cloths wrote:
       | It's nice this article includes a survey of background research!
       | 
       | > Go paragraph-by-paragraph
       | 
        | The author didn't say whether previously tuned paragraphs are
        | fed back into Claude to generate the following paragraph
        | (something like the loop sketched below?).
       | 
       | > balancing ideas with personal experiences results in engaging
       | content. Adding personal experiences into an essay also disguises
       | the AI-written material.
       | 
        | Now the problem is: does AI-generated personal experience
        | count as personal experience? :)
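        | 
        | Re the paragraph-by-paragraph question, a minimal sketch of
        | that loop (with hypothetical claude() and tune_by_hand()
        | helpers standing in for the model call and the human editing
        | pass) might look like:
        | 
        |     def claude(prompt: str) -> str: ...      # model call stub
        |     def tune_by_hand(text: str) -> str: ...  # human edit stub
        | 
        |     outline_points = ["..."]   # bullets from your outline
        |     paragraphs: list[str] = []
        |     for point in outline_points:
        |         context = "\n\n".join(paragraphs)  # tuned text so far
        |         draft = claude(f"Article so far:\n{context}\n\n"
        |                        f"Write the next paragraph on: {point}")
        |         paragraphs.append(tune_by_hand(draft))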
        
       | chankstein38 wrote:
        | This has been my experience as well with ChatGPT. Sure, you can
        | tell it to write like some other random persona or something,
        | but realistically it's always felt pretty obvious that
        | something was written by ChatGPT. The more I interact with it,
        | the less excited I am about its writing capabilities, because
        | its output always feels like it was written by a spam blog.
        
         | xanderlewis wrote:
          | It's hardly surprising when you consider that a writer's
          | distinct voice is to a large extent determined by their own
          | particular diet of others' writing, which in the case of
          | ChatGPT is... well... everything. So of course you get
          | blandness.
        
           | crooked-v wrote:
           | You might find NovelAI interesting. Their homegrown models
           | are intentionally trained to emulate different writing styles
           | [1] and genre standards.
           | 
           | [1]: https://tapwavezodiac.github.io/novelaiUKB/Directing-
           | the-Nar...
        
             | xanderlewis wrote:
             | Certainly looks interesting. But why would you want to
             | imitate other writers' styles, except for pure novelty's
             | sake? You could also train an AI to imitate yourself, given
             | enough content, but why would you? I'm not sure I fully
             | understand the motivation.
        
       | swatcoder wrote:
        | We're going to gain a ton of utility when we can let go of the
        | starry-eyed idea of LLMs as "prospective AGI agents" that should
        | be broadly capable and need to be ethically censored, and
        | revitalize the productive and practical idea of them as "text
        | completers which may be engaged conversationally".
       | 
        | The author needs to fight uphill and contort their workflow to
        | squeeze out good articles because Anthropic (like OpenAI) is
        | caught up in the maybe-fantasy of creating AGI agents, and so
        | burdens its product design and its own research/engineering
        | efforts with heavy, prescriptive training in "alignment" and
        | "ethics".
       | 
       | But use cases like Copilot had it more right before, as do apps
       | like Narrative AI. If your LLM is for generating code, it doesn't
       | need to learn that "killing" is bad and insist that processes
       | shouldn't be killed, and if it's generating story content it
       | doesn't need to learn that every output needs to resolve all
       | tension and deliver a life lesson about caring for each other.
       | 
        | These absurdities only happen because today's pack-leading
        | companies are focusing their attention on making history with
        | AGI (doubtful) instead of making products with generative
        | systems (useful).
       | 
        | And the absurdities will persist as these companies try to
        | layer products on top of the lobotomized "agents" with GPTs or
        | characters or whatever, instead of productizing the
        | technological, useful, generative layer directly.
       | 
        | Hopefully some of the recent team shuffles at Google, Meta, and
        | Microsoft, as well as the crisis at OpenAI, hint that we're
        | starting to cast off the fantasy-laden and cult-tainted AGI
        | fetishization and are returning to the exciting engineering
        | promises of the technology that's already here.
        
         | TapWaterBandit wrote:
          | I think this is one of the upsides of the chaos at OpenAI
          | recently. It has really shined a light on how many of the
          | people most fervently obsessed with "safe-AI" aren't
          | clearheaded or rational thinkers, and are as prone to making
          | disastrous and ill-advised decisions as anyone else. This is
          | good, because there is an unfortunate human tic whereby
          | pessimism/cynicism is equated with wisdom while optimism is
          | equated with naivety.
         | 
          | But when the pessimists and cynics show so clearly, on such a
          | large scale, that they aren't uniformly wise or competent, it
          | will allow more levelheaded perspectives on LLMs, and a more
          | general cautious optimism, to be the guiding philosophy
          | around developing these tools.
        
         | crooked-v wrote:
         | > and deliver a life lesson about caring for each other
         | 
          | Having experienced the same thing myself, I wonder why this
          | is so omnipresent in ChatGPT's output whenever it's told to
          | produce something in a narrative format. Did they RLHF it on
          | a bunch of children's storybooks or something?
        
           | darreninthenet wrote:
           | Probably used the scripts from every 1990s US sitcom
        
       | dv-tw wrote:
       | Just to point out that I am not the original author of this
       | article. All credit goes to the original writer. I am guessing
        | the title was changed to "My experience" from "A writer's
        | experience" after submission. Want to give credit where credit
        | is due.
       | 
        | I found the research in this article to be really well done,
        | and it reflects what I run into in my own technical writing
        | work. I tried
       | using ChatGPT a few times to write articles, and the result was
       | less than pleasing. I find it helpful for ideating rather than
       | actually writing.
        
       ___________________________________________________________________
       (page generated 2023-11-22 23:01 UTC)