[HN Gopher] Using GPT-3 for plain language incident root cause from logs
___________________________________________________________________
Using GPT-3 for plain language incident root cause from logs
Author : stochastimus
Score : 48 points
Date : 2021-01-12 17:08 UTC (5 hours ago)
(HTM) web link (www.zebrium.com)
(TXT) w3m dump (www.zebrium.com)
| bbu wrote:
| This is pretty cool! However, these two samples are very simple
| to solve. I'd love an "AI" to find root causes for problems that
| are not obvious. Just throw the whole log collection at it and
| let it solve all the issues. One can dream ;)
| m463 wrote:
| GPT-3-stackoverflow
| deeeeplearning wrote:
  | But seriously, was Stackoverflow part of the training data
  | used for GPT-3? It would definitely be an interesting
  | fine-tuning experiment.
| stochastimus wrote:
| From what I've read, the answer is "yes", stackoverflow was
| crawled. EDIT: I looked and stackoverflow is included in
| the Common Crawl dataset, which is one of the datasets on
  | which GPT-3 was trained. Having said that, it's not clear to
  | me how thoroughly that crawl covers the domain... it looks
  | pretty comprehensive, though.
| http://index.commoncrawl.org/CC-
| MAIN-2020-24-index?url=*.sta...
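  | If you want to check the coverage yourself, the index server
  | can be queried directly. Here's a rough sketch (untested; the
  | exact URL pattern is my assumption, and it only counts one
  | result page):
  |
  |     import json
  |     import requests
  |
  |     # Query one page of the CC-MAIN-2020-24 index for
  |     # stackoverflow.com captures. The CDX server returns one
  |     # JSON record per line.
  |     resp = requests.get(
  |         "http://index.commoncrawl.org/CC-MAIN-2020-24-index",
  |         params={"url": "*.stackoverflow.com",
  |                 "output": "json", "page": 0},
  |         timeout=60,
  |     )
  |     resp.raise_for_status()
  |
  |     records = [json.loads(line)
  |                for line in resp.text.splitlines()]
  |     print(len(records), "captures on this page")
  |     for rec in records[:5]:
  |         print(rec.get("timestamp"), rec.get("url"))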
| stochastimus wrote:
  | Yeah, my experience so far has been that if I just pile a bunch
  | of logs in there without giving it salient lines, the language
  | model tends to either rat-hole on some irrelevant detail or
  | construct a non-factual narrative. But when Zebrium picked
  | these lines autonomously and GPT-3 summarized them, we got a
  | meaningful summary. Having said that, I'd also like to get
  | GPT-3 savvier with larger and larger log-based prompts, and I'm
  | hoping that's possible with some tweaks to the prompt and some
  | fine-tuning. I'll keep the blog posted as we do more
  | experiments.
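  | To give a flavor of the GPT-3 half: the prompt is basically
  | "the salient log lines plus a leading phrase for the summary".
  | A rough sketch against OpenAI's completions API (not our exact
  | prompt, settings, or log lines):
  |
  |     import openai
  |
  |     openai.api_key = "sk-..."  # your API key
  |
  |     def summarize_incident(log_lines):
  |         """Ask GPT-3 to summarize pre-selected log lines."""
  |         # Only the handful of lines the anomaly detection
  |         # flagged go into the prompt; dumping the whole log
  |         # tends to derail the model.
  |         prompt = (
  |             "An expert read these incident log lines:\n\n"
  |             + "\n".join(log_lines)
  |             + "\n\nThe expert's plain English root cause:"
  |         )
  |         resp = openai.Completion.create(
  |             engine="davinci",   # base GPT-3 model
  |             prompt=prompt,
  |             max_tokens=80,
  |             temperature=0.3,    # keep it factual, not creative
  |             stop=["\n\n"],
  |         )
  |         return resp.choices[0].text.strip()
  |
  |     # Hypothetical example lines, not the ones from the blog:
  |     print(summarize_incident([
  |         "FATAL: the database system is in recovery mode",
  |         "ERROR: could not open relation file: No such file",
  |     ]))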
| a-dub wrote:
| this is really cool!
| [deleted]
| brianjunyinchan wrote:
  | Super interesting. I wonder what other latent domain-specific
  | intelligence GPT-3 picked up during training that can be tapped
  | with just text in and text out. Like a flash-card generator?
| stochastimus wrote:
| Hmm, I like this direction - so maybe, as the user is
| navigating the incident, let them steer the model with
| questions and/or additional lines. Is that sort of what you'd
| envision?
| sthatipamala wrote:
| Polar (https://getpolarized.io/) has a GPT-3 based flash card
| generator from text highlights. It's available to premium
| subscribers.
| king_magic wrote:
| I'm fairly bearish on GPT-3, but this is actually a pretty cool
| application.
| jacques_chester wrote:
| Is there a reason I'd use this approach over a process mining /
| log mining system? I feel like it needs me to guess the right
| question to get an answer.
| stochastimus wrote:
| Well, I've been trying really hard not to point it out because
| I don't want this to be like a commercial. :) But, the idea
| here is that the Zebrium ML picks the incident lines
| unsupervised; then, the GPT-3 model creates the summary
  | unsupervised. That combination is what we've been working on in
  | a private beta, so the user gets the best of both worlds.
| jacques_chester wrote:
| Gotcha. I had understood it to be purely GPT-3 somehow,
| rather than as a second step.
| mckirk wrote:
| That's cool and all, but I'm pretty sure what we really want to
| see is
|
| "The expert described what had happened, in the form of a Haiku:"
| stochastimus wrote:
| I just tried this and I might stick with these settings! ;-)
| For the postgresql example in the blog, I used your prompt.
| Here's what I got:
|
  | The logs were in a mess,
  | But the expert could see,
  | That the database was in distress.
| phaemon wrote:
| I'm kind of surprised GPT-3 doesn't "understand" haiku. You'd
| think it could extrapolate the rules?
|
  | The logs are broken!
  | Sysadmin sweeps up the leaves,
  | The database cried
| rictic wrote:
| The encoding used by GPT-2 and GPT-3 greatly obscures many
| of the textual properties of words. This at least partly
| accounts for why it has so much trouble with meter, rhyme,
| syllables, and some math.
|
| More info: https://www.gwern.net/GPT-3#bpes
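  | You can see the mismatch by running the GPT-2 byte-pair
  | encoder (GPT-3 reused the same vocabulary) over a few words;
  | a quick sketch with the Hugging Face tokenizer:
  |
  |     from transformers import GPT2Tokenizer
  |
  |     # GPT-2's BPE vocabulary, which GPT-3 also uses.
  |     tok = GPT2Tokenizer.from_pretrained("gpt2")
  |
  |     for word in ["database", "distress", "sysadmin", "haiku"]:
  |         # Leading space gives the mid-sentence form.
  |         pieces = tok.tokenize(" " + word)
  |         print(word, "->", pieces)
  |
  | Whatever splits come out follow byte-pair frequency, not
  | syllable boundaries, and the model only ever sees the token
  | IDs, so counting syllables is genuinely hard for it.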
| stochastimus wrote:
| Thanks for putting that info here!
___________________________________________________________________
(page generated 2021-01-12 23:00 UTC)