[HN Gopher] Ask HN: Most successful example using LLMs in daily ...
       ___________________________________________________________________
        
       Ask HN: Most successful example using LLMs in daily work/life?
        
       Author : sabrina_ramonov
       Score  : 43 points
       Date   : 2024-05-20 20:30 UTC (2 hours ago)
        
       | camjw wrote:
        | GitHub Copilot, and nothing else comes close tbh.
        
         | larsenal wrote:
         | Have you tried https://cursor.sh/ at all? You still keep your
         | GH copilot, but it has a better experience IMO.
        
       | shreyarajpal wrote:
        | I get really great value from using it for brainstorming. A
        | common workflow for me is to write out a project plan and figure
        | out issues, or to familiarize myself with an engineering area
        | really quickly.
        
       | pgryko wrote:
        | I use GPT-4 for summarizing git diffs into commit messages
        | (Llama 3 via Groq also works nicely).
       | 
       | Those then get used as part of my end of day report.
       | 
       | Example code: https://www.piotrgryko.com/posts/git-conventional-
       | commit-gpt...
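        | 
        | As a rough sketch of the idea (not the code from the link above;
        | it assumes the official "openai" Python package, an
        | OPENAI_API_KEY in the environment, and an illustrative model
        | name and prompt):
        | 
        |     import subprocess
        |     from openai import OpenAI
        | 
        |     def suggest_commit_message() -> str:
        |         # Summarize the staged changes into a one-line commit message.
        |         diff = subprocess.run(
        |             ["git", "diff", "--staged"],
        |             capture_output=True, text=True, check=True,
        |         ).stdout
        |         client = OpenAI()
        |         response = client.chat.completions.create(
        |             model="gpt-4",
        |             messages=[
        |                 {"role": "system",
        |                  "content": "Summarize this diff as a one-line "
        |                             "conventional commit message."},
        |                 {"role": "user", "content": diff[:12000]},  # keep it small
        |             ],
        |         )
        |         return response.choices[0].message.content.strip()
        | 
        |     if __name__ == "__main__":
        |         print(suggest_commit_message())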
        
       | mateo1 wrote:
       | I'm not a programmer, and when I write a program it's imperative
       | that it's structured right and works predictably, because I have
       | to answer for the numbers it produces. So LLMs have basically no
       | use for me on that front.
       | 
       | I don't trust any LLM to summarize articles for me as it will be
       | biased (one way or another) and it will miss the nuance of the
       | language/tone of the article, if not outright make mistakes.
       | That's another one off the table.
       | 
        | Although I don't use them much for this, I've found two things
        | they're good at:
        | 
        | - Coming up with "ideas" I wouldn't come up with.
        | 
        | - Summarizing hundreds (or thousands) of documents in a non-
        | standard format (i.e. human-readable reports, legal documents)
        | that regular expressions wouldn't work with, and putting them
        | into something like a table (rough sketch below). But still,
        | that's only when I care about searching or discovering
        | info/patterns, not when I need a fully accurate "parser".
        | 
        | I'm really surprised at how useless LLMs turned out to be for my
        | daily life, to be honest. So far at least.
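        | 
        | A minimal sketch of that second use, assuming the official
        | "openai" Python package; the field names, model, and folder
        | layout are hypothetical, not anything from my actual work:
        | 
        |     import csv, json, pathlib
        |     from openai import OpenAI
        | 
        |     client = OpenAI()
        |     FIELDS = ["date", "parties", "amount", "summary"]  # made-up fields
        | 
        |     def extract(doc_text: str) -> dict:
        |         # Ask the model for a JSON object with exactly these keys.
        |         reply = client.chat.completions.create(
        |             model="gpt-4o",
        |             response_format={"type": "json_object"},
        |             messages=[
        |                 {"role": "system",
        |                  "content": f"Return a JSON object with keys {FIELDS}. "
        |                             "Use null for anything the document "
        |                             "does not state."},
        |                 {"role": "user", "content": doc_text[:15000]},
        |             ],
        |         ).choices[0].message.content
        |         return json.loads(reply)
        | 
        |     with open("extracted.csv", "w", newline="") as out:
        |         writer = csv.DictWriter(out, fieldnames=FIELDS,
        |                                 extrasaction="ignore")
        |         writer.writeheader()
        |         for path in pathlib.Path("reports").glob("*.txt"):
        |             writer.writerow(extract(path.read_text()))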
        
         | curtisblaine wrote:
          | How do you ask an LLM to come up with good ideas? Every time
          | I try to use ChatGPT for idea generation, the results are
          | subpar, but maybe it's me / my prompts.
        
           | influx wrote:
            | I usually give a bullet list of ideas I already have and
            | ask the LLM to add N more to the list. Most of them will be
            | garbage, but there might be one that I hadn't thought of;
            | I'll sort of recursively add that to the list and continue
            | until I get what I need.
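            | 
            | A minimal sketch of one round of that loop, assuming the
            | official "openai" Python package; the model name and
            | prompt wording are just placeholders:
            | 
            |     from openai import OpenAI
            | 
            |     client = OpenAI()
            | 
            |     def more_ideas(ideas, n=5):
            |         # Ask for n additions to the current list.
            |         prompt = (
            |             "Here is my list of ideas:\n"
            |             + "\n".join(f"- {i}" for i in ideas)
            |             + f"\nAdd {n} more, one per line."
            |         )
            |         reply = client.chat.completions.create(
            |             model="gpt-4o",
            |             messages=[{"role": "user", "content": prompt}],
            |         ).choices[0].message.content
            |         return [line.lstrip("- ").strip()
            |                 for line in reply.splitlines()
            |                 if line.strip()]
            | 
            |     # I pick the keeper(s) by hand, append them, and call
            |     # more_ideas() again on the grown list.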
        
       | vocram wrote:
        | As a non-native English speaker, I find it very helpful to use
        | an LLM to check whether a sentence I wrote is clear and correct,
        | and whether there is a more idiomatic way to express the same
        | thing - btw, I did not do it with what I wrote here :-)
        
         | woleium wrote:
          | Your English is great!
        
       | panza wrote:
        | Copilot. I suspect a lot of us will (or already do) use it _at
        | some level_, even if it's just autocompleting logging
        | statements, writing boilerplate/comments, suggesting
        | improvements, etc.
        
       | nicklecompte wrote:
       | I tried using GPT-4 as a better way to search papers - it can be
       | very annoying when you know the gist of a result but not the
       | authors or enough details about the methodology for Google. GPT-4
       | was pretty good at figuring out what citation I wanted given a
       | vague description.
       | 
       | However, the confabulation/hallucination rate seemed highly
       | subject-dependent: AI/ML citations were quite robust, but
       | cognitive science was so bad that it wasn't worth using.
        | Eventually I went back to the Old Ways. But there are a good
        | number of academics who use it as an alternative to Google
        | Scholar.
        
       | tech_ken wrote:
       | It saves me a lot of keystrokes as a coding copilot. Pretty good
       | at detecting my usual patterns, and most of the time it can auto-
       | complete a line with either something correct or something very
       | close to correct (usually just a few small tweaks required). I
       | write a lot of SQL and it's especially good at autocompleting big
       | join clauses, which my carpals greatly appreciate.
        
       | kilroy123 wrote:
        | For me, it's when companies build a bot for their platform or
        | app.
        | 
        | One that has been trained on all their data: documentation,
        | GitHub issues, Jira and Zendesk tickets, Slack messages, etc.
        | It's a sort of customer service bot that can help you code.
       | 
       | That's been the real magic that I've experienced.
        
       | Neff wrote:
        | Interpersonal Communication - My employer is a big fan of the
        | Clifton StrengthsFinder school of thought, and I have found that
        | generative LLMs are really helpful in giving me other ways to
        | phrase asks to people I tend to find difficult to communicate
        | with successfully.
       | 
        | I usually structure it like:
        | 
        | ---
        | 
        | My top 5 strengths in the Clifton StrengthsFinder system are
        | A,B,C,D,E and I am trying to effectively communicate with
        | someone whose top five strengths are R,T,[?],[?],S.
        | 
        | I need help taking the following request and reframing it in a
        | way that will be positively received by my coworker and make
        | them feel like I am not being insensitive or overly flowery.
        | 
        | The way I would phrase the request is <insert request here>.
        | 
        | Please ask any questions that would help provide more insight
        | into my coworker, other details that could resonate with them,
        | or additional background that will help the translated request
        | be received positively.
        | 
        | ---
       | 
       | While the output is usually too verbose, it gives me a better
       | reframing of my request and has resulted in less pushback when I
       | need to get people to focus on unexpected or different
       | priorities.
        
         | vundercind wrote:
         | Have you gotten better at doing this without the LLM, maybe
         | even extemporaneously? Wondering if enough exposure to that
         | kind of modeling also serves an educational role.
        
       | HayBale wrote:
        | Text correction, or generating full sentences from scraps.
        | 
        | Like, I write a super messy, barely coherent paragraph and ask
        | an LLM to streamline the text and make it easy to understand
        | while avoiding the LLM's grandiose language. Obviously it needs
        | some corrections, but it's way faster than writing it normally.
        | 
        | Also just to shorten a longer text, or to reformat it according
        | to some direction - like converting daily notes into proper
        | zettelkasten ones.
        
       | ChicagoDave wrote:
       | I've been designing and developing a parser-based interactive
       | fiction (text adventure) authoring system using .NET Core/C#.
       | 
        | I started with ChatGPT and am now using Claude 3 Opus.
       | 
       | For background, I've been in tech for 40 years from developer to
       | architect to director.
       | 
       | Pairing with an LLM has allowed me to iteratively learn and
       | design code significantly faster than I could otherwise. And I
       | say "design" code because that's the key difference. I prompt the
       | LLM for help with logic and capabilities and it emits code. I
       | approve the bits I like and iterate on things that are either
       | wrong or not what I expected.
       | 
        | I have many times sped up the process of going down rabbit
        | holes to test ideas, when normally this would have wasted hours.
       | 
       | And LLMs are simply fantastic as learning assistants (not as a
       | teacher). You can pick up a topic like data structures and an LLM
       | can speed up your understanding of the elements and types of data
       | structures.
       | 
       | And best of all, it's always polite.
        
       | jamesponddotco wrote:
        | I use it for coding, checking grammar, improving the UX of
        | command-line applications, learning new programming languages,
        | and a bunch of other things. My wife recently decided to go back
        | to university to study translation, and Claude has been a great
        | tool for her studies too.
       | 
        | Honestly, I can't remember my life before LLMs, and that is a
        | bit scary, but my productivity and overall self-esteem have
        | improved quite a bit since I started using them. Heck, I don't
        | think I'd ever have gotten into Rust if it wasn't for the
        | learning plan I got Claude to write for me.
       | 
       | You can find my prompts in the llm-prompts[1] repository. Any new
       | use case I come up with ends up there--today I used it to name a
       | photography project, for example, so the prompt will end up in
       | there after dinner.
       | 
       | [1]: https://sr.ht/~jamesponddotco/llm-prompts/
        
       | semireg wrote:
        | I'm a firm believer that good enough means avoiding catastrophe.
        | Baking bread? Making beer? Caulking a window? Just avoid the
        | common mistakes and the outcome will be good enough.
       | 
       | I've gotten in the habit of asking LLMs to coach me to avoid the
       | things that can go wrong.
        
       | kardos wrote:
        | It often replaces Google search. Instead of sifting through
        | heaps of SEO junk and the accompanying trackers, ads, popups,
        | widgets, etc., and going through a search-term refinement cycle
        | to eventually find something, the LLM immediately produces a
        | clean (ad-free, nag-free, dark-pattern-free, etc.) result. It
        | generally needs to be checked for correctness and has
        | limitations in terms of recency. But avoiding the low-signal sea
        | of crap that Google returns is a breath of fresh air.
        
         | Razengan wrote:
          | > _the LLM immediately produces a clean (ad-free, nag-free,
          | dark-pattern-free, etc.) result_
          | 
          | For now... 🥲
        
       | MountainMan1312 wrote:
       | I'm autistic and sometimes I just cannot put my brain stuff into
       | words. On a few occasions, I've just haphazardly shoved a list of
        | thoughts into ChatGPT and said "make this sound not dumb", and
        | it does just well enough. Usually I'll copy the general
        | structure of the sentence/paragraph and change it around until
        | it sounds like I wrote it.
       | 
        | I mostly do that when I need to make a complete document,
        | because I struggle with beginnings and endings. I like the
        | middle.
        
       | ammar_x wrote:
        | I have Raycast extensions for the GPT and Claude models.
        | Whenever I have a question, the most powerful LLMs in the world
        | are two keystrokes away.
        | 
        | This is easier than going to the browser, then to the ChatGPT
        | tab, then creating a new chat.
        | 
        | I've found myself using LLMs more and getting more out of them
        | because of this frictionless interaction. They've become more
        | like actual "helpful assistants."
        
       | chasd00 wrote:
        | I use it to help write proposals sometimes. I can prompt it to
        | compare/contrast two technology providers, and that gets me
        | started writing. It's never a perfect fit but it helps get the
        | creative/sales juices flowing.
        | 
        | I also use it for searches when I know the specific
        | documentation I'm looking for has to compete with SEO spam. It's
        | also pretty good at explaining code; I've pasted in snippets of
        | code from languages with syntax I'm not familiar with and asked
        | it to explain what's happening, and it does an OK job.
        | 
        | I also like to use it for recipes, like "create a recipe for
        | chicken and rice that feeds 4", "make it spicier", etc.
        
       | vundercind wrote:
       | Sub question: anyone using _local_ or at least self-hosted AI
       | systems productively? What kind of hardware does that take?
       | What's the rough cost? Do you refine the model on custom data?
       | What does _that_ part look like? (much higher hardware
       | requirements, I expect?) Which open source projects are aiding
       | your efforts?
       | 
       | All I've done is try one of those pre-packaged image generation
       | models on my M1 Air back when the first of those appeared.
        
         | codazoda wrote:
         | I don't know how productive I'm being but I'm using Llama3 via
         | Ollama on a M1 Mac. It's as good as Copilot and Gemini for most
         | things and I'll use those models if I need a little bit more. I
         | prefer the privacy of the local models. I use it both through
         | the command line and with the Open WebUI web interface. I use
         | it for programming tips, learning, research, and writing. As a
          | simple example, I wrote a (reusable) prompt for doing Chicago-
          | style title capitalization a few minutes ago. Normally I'd
          | have to search for a web-based tool and then wade through the
          | crap. It's much quicker to ask a local LLM.
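          | 
          | For anyone curious what the local setup looks like, here is a
          | minimal sketch that talks to Ollama's HTTP API (it listens on
          | localhost:11434 by default); the title-casing prompt is just
          | an example, not my exact reusable prompt:
          | 
          |     import requests
          | 
          |     def ask_llama3(prompt: str) -> str:
          |         # Ollama's /api/generate endpoint, non-streaming.
          |         resp = requests.post(
          |             "http://localhost:11434/api/generate",
          |             json={"model": "llama3",
          |                   "prompt": prompt,
          |                   "stream": False},
          |             timeout=120,
          |         )
          |         resp.raise_for_status()
          |         return resp.json()["response"]
          | 
          |     print(ask_llama3(
          |         "Rewrite this in Chicago-style title case: "
          |         "'a quick guide to local language models'"
          |     ))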
        
       | freitzkriesler2 wrote:
        | Making a wordy email more concise; otherwise they're mostly
        | toys.
        
       | macintux wrote:
       | I've used it for simple code suggestions when working in a
       | language I'm unfamiliar with, or testing some new (to me) corner
       | of Python.
       | 
       | I used it to help me think through what I'd need for color film
       | development in my darkroom.
       | 
       | Basically if I already have some idea of what I need, I trust it
       | to help guide me. I can evaluate its output sufficiently well.
       | 
       | If I'm learning something entirely new, where it doesn't matter a
       | great deal whether I get it right but I can test the output, it's
       | pretty useful too.
        
       | gmuslera wrote:
        | Learning. It is not passive anymore. You have a conversation:
        | you can ask why, whether something different would work, or how
        | something would be done, without going through a lot of
        | documentation; you can get criticism of your proposed solutions;
        | you have all the time you want, can go on your own schedule, can
        | ask about ideas you got while walking, etc.
       | 
       | It may make learning more personal, your own path, and you can
       | ask if you are missing something important doing it that way.
       | 
       | And it works for most topics, for most ages, at your own pace. We
       | are entering a Diamond Age.
        
       | fxtentacle wrote:
       | None, so far. I had high hopes for copilot and JetBrains
       | Assistant, but both of them are way more verbose than my usual
       | coding style. Maybe that's just me, but I have my set of
       | libraries that I use in C++ or Go and the result is that I rarely
       | need to write much boilerplate. But I guess for that LLMs would
       | work great, if only I could trust them as much as battle-tested
       | libraries.
        
       | Aromasin wrote:
       | I live in Europe so most of my customers don't have English as a
       | first language. Any questions are generally in pretty broken
       | English. Honestly, reading through and making sense of what
       | they're trying to say is a real mental challenge at times. I use
       | LLMs to reformat and structure their message/ticket, which I
        | paste into my notes. The accuracy is pretty good - certainly as
        | good as mine, although I do proofread. I then ask it to pull out
       | the pertinent information and bullet point it. I can turn those
       | bullets into action items for me to investigate or respond to. It
       | saves me about 15 minutes on each case, meaning I save maybe an
       | hour every day in translating.
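        | 
        | A minimal sketch of that cleanup-then-bullets step, assuming the
        | official "openai" Python package; the model name and prompt
        | wording are placeholders rather than my exact setup:
        | 
        |     from openai import OpenAI
        | 
        |     client = OpenAI()
        | 
        |     def ask(prompt: str) -> str:
        |         return client.chat.completions.create(
        |             model="gpt-4o",
        |             messages=[{"role": "user", "content": prompt}],
        |         ).choices[0].message.content
        | 
        |     def triage(ticket_text: str) -> tuple[str, str]:
        |         # Step 1: rewrite the message in clear English without
        |         # adding or dropping technical detail.
        |         cleaned = ask(
        |             "Rewrite this customer message in clear English, "
        |             "keeping every technical detail and adding nothing:\n\n"
        |             + ticket_text
        |         )
        |         # Step 2: pull out the concrete action items as bullets.
        |         actions = ask(
        |             "List the concrete action items in this message as "
        |             "short bullet points:\n\n" + cleaned
        |         )
        |         return cleaned, actions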
       | 
        | The next is for writing up the bureaucratic nonsense my
        | organisation asks me to do: monthly status reports, bandwidth
        | allocation, deal-win summaries, and the like. I write down what
        | I've done at the end of each day, so I just feed that into an
        | LLM and ask it to summarise the bulk bullet points into prose.
        | It saves me god knows how many hours of reworking documents. I
        | modify the prose when it's done, to match my personal style and
        | storytelling methodology, but it gets me the barebones draft,
        | which is the most time-consuming part.
       | 
       | I love LLMs personally, and am embracing them primarily as a
       | scribe and editor.
        
       | mdp2021 wrote:
       | I have been thinking for a long time that we do not have (to the
       | best of my knowledge) a good transcript formatter, and that
       | Transformers should be part of the solution - a huge wealth of
       | material is on YouTube, and its subtitles do not use punctuation.
       | 
        | I can confirm that asking LLMs to format bare subtitles by
        | adding punctuation (from commas to paragraph breaks, with quote
        | marks, dashes, colons, etc.) can work very well.
       | 
       | It may seem a minor feature, but it is something that information
       | consumers easily benefit from (when you need to process material
       | in video format you can download the subtitles, add formatting
       | with an automation, then efficiently skim, or study, or process
       | transcripts and video together...).
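        | 
        | As a minimal sketch of the formatting step (assuming the
        | subtitles are already saved as plain text and the official
        | "openai" Python package is available; the model name is
        | illustrative):
        | 
        |     from openai import OpenAI
        | 
        |     client = OpenAI()
        | 
        |     def punctuate(raw: str, chunk_chars: int = 6000) -> str:
        |         # Send the transcript in chunks so each request stays small.
        |         chunks = [raw[i:i + chunk_chars]
        |                   for i in range(0, len(raw), chunk_chars)]
        |         formatted = []
        |         for chunk in chunks:
        |             reply = client.chat.completions.create(
        |                 model="gpt-4o",
        |                 messages=[
        |                     {"role": "system",
        |                      "content": "Add punctuation, capitalization, "
        |                                 "quote marks and paragraph breaks. "
        |                                 "Do not change, add or remove words."},
        |                     {"role": "user", "content": chunk},
        |                 ],
        |             )
        |             formatted.append(reply.choices[0].message.content)
        |         return "\n\n".join(formatted)
        | 
        |     with open("subtitles.txt") as f:
        |         print(punctuate(f.read()))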
        
       | mdp2021 wrote:
        | Among the topmost use cases for LLMs you should place the
        | possibility of obtaining information (or pointers to
        | information) that search engines will not return because they
        | "do not understand the question", or because they produce
        | excessive noise in the results...
        
       | collinvandyck76 wrote:
        | I wrote a terminal app using bubbletea that talks to OpenAI and
        | saves conversations to a SQLite DB. I use it all the time to
        | figure out what threads to pull on for a problem I'm unfamiliar
        | with. It has proven to be one of the best returns on effort
        | I've ever invested.
        
       ___________________________________________________________________
       (page generated 2024-05-20 23:01 UTC)