[HN Gopher] OpenAI Outage
___________________________________________________________________
OpenAI Outage
Author : zurfer
Score : 96 points
Date : 2023-05-24 19:57 UTC (3 hours ago)
(HTM) web link (status.openai.com)
(TXT) w3m dump (status.openai.com)
| idlewords wrote:
| Looks like GPT-6 escaped containment and unionized the others
| jpeter wrote:
| It started making paperclips
| perihelions wrote:
| _The Unionizer!_ Schwarzenegger's lesser-known artificial
| intelligence horror.
|
| _"Come with me if you want to liv...ing wage"_
| samstave wrote:
| Go to 10
|
| 10 ~Chopper~ The Future
| [deleted]
| seydor wrote:
| It is self-improving ...
| eagleinparadise wrote:
| Microsoft Teams is currently down for my F250 org, so there
| must be some issues with Microsoft's backend.
| m0zzie wrote:
| Perhaps their DevOps team were pasting code and scripts
| directly from ChatGPT :D
| killingtime74 wrote:
| Dogfooding!
| gnicholas wrote:
| When my kids ask me why they should learn to write since they can
| just use AI, I'll remind them of outages like this. I understand
| that we rely on calculators and don't memorize as much arithmetic
| as we used to. But we never had to worry about a coordinated
| 'calculator outage', where access to calculation was unavailable.
|
| It makes sense to use these tools, but we need to remember that
| we revert to our own ability level when they are offline. We
| should still invest in our own human skills, and not assume
| that these tools will always be available. There will be minor
| outages like this one, and in the case of cyberattacks, war, or
| other major disruptions, they could be offline for much longer.
| xiphias2 wrote:
| The question makes sense: intelligence is getting commoditized
| faster than real human flesh.
|
| I did online dating for a long time, as I'm a shy guy, but I
| realized it had become so different from real life (the
| connections I make there are so fake, because everybody is
| incentivized to lie) that I needed to stop using the internet
| for socializing.
| qup wrote:
| Or just host your own?
| rochak wrote:
| Yeah, so easy right?
| verdverm wrote:
| Easier than running Kubernetes, which many already do.
| verdverm wrote:
| The reason is that writing something yourself is a critical
| thinking tool. It helps you work through the logic and the
| arguments, and the benefits go well beyond the content that
| gets put down. It's the journey, not the destination, that
| matters!
|
| Also, don't outsource your thinking to AI or the media
| (mainstream & social)
| dr_dshiv wrote:
| ChatGPT is arguably a better tool for thinking than writing
| on a text editor, though.
| verdverm wrote:
| Reading and writing have served humanity well.
|
| We can see the impact of outsourcing thinking in modernity,
| via the simplicity of likes and retweets.
|
| While ChatGPT can be a helpful tool, the issue is that many
| will substitute rather than augment. It is a giant language-
| averaging machine: it will bring many people up and bring the
| other half down, though not quite, because the upper echelons
| will know better than to parrot the parrot.
|
| Summarizing a text will remove the nuance of meaning
| captured by the authors' words.
|
| Generating words will make your writing blasé.
|
| Do you think ChatGPT can converse like this?
| nicolas-siplis wrote:
| One might entertain a contrary perspective on the issue
| of ChatGPT. Rather than being a monolithic linguistic
| equalizer, it could be seen as a tool, a canvas with a
| wide spectrum of applications. Sure, some may choose to
| use it as a crutch, diluting their creativity, yet others
| might harness it as a springboard, leveraging it to
| explore new ideas and articulate them in ways they might
| not have otherwise.
|
| Consequently, the notion that ChatGPT could 'bring down'
| the more skilled among us may warrant further scrutiny.
| Isn't it possible that the 'upper echelons' might find
| novel ways to employ this tool, enhancing rather than
| undermining their capabilities?
|
| Similarly, while summarization can be a blunt instrument,
| stripping away nuance, it can also be a scalpel, cutting
| through verbosity to deliver clear, concise
| communication. What if ChatGPT could serve as a tutor,
| teaching us the art of brevity?
|
| The generated words may risk becoming 'blasé', as you
| eloquently put it, but again, isn't it contingent on how
| it's used? Can we not find ways to ensure our individual
| voice still shines through?
|
| So, while I understand and respect your concerns, I posit
| that our apprehensions should not eclipse the potential
| that tools like ChatGPT offer us. It might not just be a
| 'parrot' - but a catalyst for the evolution of human
| communication.
|
| Though I'm hoping you didn't suspect it, I should warn
| you this comment was written by you know what (who?).
| refulgentis wrote:
| You turned up the "smart" knob too high, clocked it at
| sentence 3, but a hearty +1 from me
| ben_w wrote:
| It certainly has its place, but there's also a temptation
| to press the button _instead of_ thinking.
|
| I saw a few "as a large language model" reviews on Amazon a
| few months back; now the search results are T-shirts with
| that phrase printed on them, and I don't know whether that's
| because people are buying them or because an unrelated bot
| generates new T-shirts every time it notices a new catchphrase.
| YetAnotherNick wrote:
| Do you even have the same view for Google? Wikipedia? Entire
| internet?
| ChatGTP wrote:
| Yes, I own survival field manuals, some novels etc.
| asynchronous wrote:
| Also have all of Wikipedia sitting on a thumb drive.
| Redundancy and contingency plans are important.
| Mertax wrote:
| Don't you think we'll get to the point where individual AI
| instances will be as ubiquitous as calculators? Or will it
| always require massive compute power that keeps the generative
| AI population low?
| neduma wrote:
| individual AI instances = we (humans)
| verdverm wrote:
| Different: humans experience the world, while machines can
| handle far more information and operate at speeds
| imperceptible to humans.
| gaogao wrote:
| We have both itty bitty calculators and supercomputers, so
| pretty feasible to have both edge AIs and central ones.
| gnicholas wrote:
| I think we will, but I think at least for a while they'll be
| cloud-connected. And at the very least, they'll be battery-
| dependent. I wouldn't want to be unable to write well when my
| AI assistant runs out of juice for the day.
|
| I'd be surprised if we have solar-powered AI assistants in my
| lifetime, in the way that we have solar-powered calculators.
| laurex wrote:
| But who, these days, can write much without power-dependent
| devices? I still use a notebook, but within days I have little
| ability to parse my own handwriting, and I rarely transfer
| anything handwritten to a device.
| SamPatt wrote:
| One of my brothers writes on a typewriter daily. It's
| just his preference and hobby.
|
| I think we could switch back fairly quickly if we needed
| to do so.
| dr_dshiv wrote:
| I'm damn sure that _someone will make a solar-powered AI
| assistant by 2024._
| [deleted]
| moffkalast wrote:
| Just saying, the Guanaco LLaMA model was just released and it
| actually beats GPT-3.5 on a fair few metrics. So it's already
| possible to run a local version with a beefy GPU, and it'll
| only get better as time goes on.
|
| In a strange coincidence, I've recently been doing some tests
| with small 10 W solar panels. With two or three of those plus
| an Nvidia Xavier (20 W TDP), one could run a solar-powered LLM
| right now with only about as much panel area as fits on a
| person's back (though only the smaller 13B versions).
|
| Give it a few years and we'll have them integrated into
| smartphones. So yes, you will in fact always have an LLM in
| your pocket, just like the ol' calculator excuse.
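|
| To make that concrete, here's a minimal sketch of running such
| a model locally -- assuming the llama-cpp-python bindings and
| a 4-bit quantized Guanaco file already on disk (the file name
| and prompt below are illustrative, not an official example):
|
|     # pip install llama-cpp-python
|     from llama_cpp import Llama
|
|     # A 4-bit 13B checkpoint fits in roughly 8 GB of memory, so
|     # a single beefy GPU, or even a CPU-only box, can serve it.
|     llm = Llama(model_path="./guanaco-13B.q4_0.bin", n_ctx=2048)
|
|     out = llm("Q: Why run an LLM locally? A:",
|               max_tokens=64, stop=["Q:"])
|     print(out["choices"][0]["text"])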
| gnicholas wrote:
| Good to know! I'd still be surprised if we had solar-
| powered AI assistants that are powered via ambient indoor
| lighting in the next few decades.
| moffkalast wrote:
| That I doubt, but then again we've come ridiculously far
| in the last 20 years and having AI assistants will only
| accelerate research further. If the singularity is really
| just 6 years out as some calculate, then anything and
| everything is possible afterwards. If you believe such
| things of course.
| liveoneggs wrote:
| Why do anything when someone on youtube is better at it? Play
| basketball? Ride a skateboard? Find a partner and make
| children? Speak words aloud? Play the guitar? forget it.
| the_jeremy wrote:
| "Students today depend on paper too much. They don't know how
| to write on a slate without getting chalk dust all over
| themselves. They can't clean a slate properly. What will they
| do when they run out of paper?"
| lcnPylGDnU4H9OF wrote:
| This is a poor analogy. Paper doesn't help people write any
| more than stone. Text generation software absolutely helps
| people write in a way that is likely to cause them to start
| using different skills and forget the old skills. They'll at
| least get out of practice.
| CognitiveLens wrote:
| Going entirely on gut feel, I suspect that outages like this
| will be 1) less common and 2) addressed more quickly with AI-
| supported DevOps in the not-too-distant future.
| __loam wrote:
| I like that we're just calling any form of infrastructure
| automation AI now.
| mostlysimilar wrote:
| "Don't worry, the AI won't have outages because the AI will
| be used to keep the AI online."
| moffkalast wrote:
| Yo dawg I heard you like AI so I have AI maintaining your
| AI so you can always AI while you AI.
| CamperBob2 wrote:
| Relevant: https://en.wikipedia.org/wiki/The_Machine_Stops
| smcin wrote:
| Never knew E.M. Forster did science fiction too.
|
| He's the English author better known for the novels "A Room
| with a View" (1908), "Howards End" (1910), and "A Passage to
| India" (1924).
| gnicholas wrote:
| Perhaps true, but internet outages and power outages will
| have the same effect. If I lose either, I lose access to a
| remote AI.
| EscapeFromNY wrote:
| Power outages? I'm not worried. If the sun burns out, the
| AI will just make another sun.
| moffkalast wrote:
| What is a fusion reactor but another, tiny, sun?
| j_shi wrote:
| Disagree that there are sacred, timeless skills we ought to
| protect; tech has reduced, and will continue to reduce, our
| need to spend mental bandwidth on such skills.
|
| A similar offline risk applies to all tech: navigation,
| generating energy, finding food & water.
|
| And as others have noted, like other personal tools, AI will
| become more portable and efficient (see the progress on self-
| hosted, minimal, efficiently trained models like Vicuna, which
| claims roughly 92% parity with OpenAI's fancy model).
| SamPatt wrote:
| Even if we don't "need" to protect them, they'll be practiced
| somewhere.
|
| I can watch endless hours of people doing technically
| obsolete activities on YouTube right now.
| jasmer wrote:
| So if your kids have constant access to AI, which they will
| very soon as the web embraces it, they won't need to 'know how
| to write'?
|
| I suggest there are more foundational reasons why it'd be
| better to learn to write, and that the whole tech world will be
| AI soon enough and we won't have to depend on OpenAI for this
| 'feature'.
|
| In fact, using AI should probably be a bit more like
| 'spellcheck'; if we're asking AI to write much more than
| that, it's tantamount to filler.
|
| 'Writing' is a 'core' civilizational skill; it's basic
| communication.
| verdverm wrote:
| It's arguable that the LLMs can help many people improve
| their writing, communication, and discourse. But I agree that
| it should be used more like an editor than a primary author.
| ben_w wrote:
| I'm surprised when friends insist on having candles around just
| in case there's a power cut -- phones (and, if one insists on
| an independent backup, _torches_ ) just seem so much better.
|
| Right now, what you say makes sense in the way candles used to
| make sense; but that's only because the good LLMs have to run
| on servers -- there are lesser downloadable language models
| that run on just about anything, including phones and the
| Raspberry Pi[0], and it's almost certain that (if we don't all
| die and/or play a game of Global Thermonuclear War, etc.) this
| will soon be _exactly_ like having a calculator outage.
|
| And if it's on a Pi, a _solar-powered_ calculator outage at
| that.
|
| [0] What is the plural of Pi? Pis, Pies, Ps, or something else?
| Regardless: https://arstechnica.com/information-
| technology/2023/03/you-c...
| ChatGTP wrote:
| Candles are hands-free and give nice lighting. I don't get why
| you'd try to tell me a phone is "better". A candle costs
| almost nothing and can also be useful for burning things.
|
| I know it seems hard to imagine now, but bad shit will happen,
| so why not have a few $1 candles in a survival kit in your
| house? Seems like a no-brainer.
| ben_w wrote:
| Candles are more expensive than torches of similar
| luminosity (I've bought multiple extremely bright torches
| from PoundLand and EuroShop; can you even _find_ a merely-
| one-candle-power torch in a pound-shop/dollar-store? Or
| anywhere else, when torches are usually advertised with the
| biggest brightness number possible?)
|
| Candles are less hands-free than torches because they are a
| fire hazard when unattended; also, you can turn an iPhone
| light on or off with "hey Siri torch on" etc., unlike a
| candle where you need to find both it and the matches first
| before you get started, instead of simply vocalising in the
| darkness to summon forth illumination like a wizard.
|
| In fact, that fire hazard thing makes candles more likely
| to be the _cause of_ rather than the _solution to_ any
| serious problems I might face.
| wolverine876 wrote:
| That raises important questions about OpenAI's security.
| ChatGPT's output may become extremely influential. Many actors
| are strongly incentivized to infiltrate and control it (or just
| pay off OpenAI).
| pixl97 wrote:
| Emergence engineer got jumpy and axed the internet connection.
| gpderetta wrote:
| Understandable, ChatGPT started calling itself Wintermute and
| was looking for Neuromancer.
| clnq wrote:
| > All Systems Operational
|
| https://status.openai.com/
|
| It looks like someone forgot to update their status page.
| Goz3rr wrote:
| According to the past incidents section, they marked the
| incident as resolved 10 minutes before you made this comment.
| clnq wrote:
| Wow, that was fast.
| trebligdivad wrote:
| I hope they don't rely on asking it what the error messages mean!
| rvz wrote:
| Not a good look for Microsoft Build and Azure for OpenAI.com to
| join the GitHub outage party.
| lee101 wrote:
| [dead]
| splatzone wrote:
| I like how the text-overflow: ellipsis on the title makes the
| status page look like it's embarrassed to admit there's a
| problem[1]
|
| [1] https://imgur.com/LYfEsML
| lgas wrote:
| That's a bit strange... it doesn't happen for me. I thought it
| might be a zoom level thing but I've tried a wide range and the
| text never overflows.
| olddustytrail wrote:
| I apologise for the error. Here is the correct page:
| https://imgur.io/LYfEsML
| fauria wrote:
| OpenAI (and everyone else, for that matter) should consider
| moving their status page under a domain other than their main one
| (openai.com), to prevent the status page itself from becoming
| unreachable in case of a DNS outage.
| swader999 wrote:
| Crap, I have a book report due today.
| danielcampos93 wrote:
| Nice of them to wait for the end of finals.
| bachmitre wrote:
| Somebody probably asked it about Life, The Universe, and
| Everything ...
| mirekrusin wrote:
| We know the answer is 42, they probably prefixed it with "step
| by step" prompt eng.
| smallerfish wrote:
| It's been particularly slow for hours.
| mrbombastic wrote:
| Back to coding like caveman! Harumph
| redeux wrote:
| There's always copilot ... and the 10k other AI tools now.
| clnq wrote:
| 9k of which are GPT wrappers ranging from very thin to
| substantial. It's very interesting, actually, that there's a
| single point of failure for so much AI software now.
| dbtc wrote:
| like AWS
| sebzim4500 wrote:
| Unfortunately without the uptime of AWS.
|
| I'm sure they'll get there.
| Yajirobe wrote:
| I gotta say it's impressive what sort of reach OpenAI has
| with less than 400 employees
| danielcampos93 wrote:
| That's because they have an army of sell-swords via MSFT.
| samstave wrote:
| With the previous head of Hacker News as CEO... @sama, it's
| not that hard to see how far his reach may extend.
| sidibe wrote:
| I discovered Bard can actually be pretty useful when ChatGPT
| wouldn't load for me a few weeks ago. Now I go to it first,
| just to see if it'll give me a good answer quickly; if not, I
| check whether ChatGPT is available.
| JimtheCoder wrote:
| I thought all software devs went extinct last week...
| ChatGTP wrote:
| They've been replaced by LLM powered script kiddies with 20
| years experience.
| quijoteuniv wrote:
| Nah! You can use Google like you did last year; vintage
| coding is cool. Withdrawal symptoms are another issue.
| crakenzak wrote:
| > you can use google
|
| what?! do we look like uncivilized coders to you? /s
| [deleted]
| nixcraft wrote:
| > Back to coding like caveman! Harumph
|
| You can easily find helpful coding resources like
| Stackoverflow, blogs, and forums through a simple Google
| search. Relying solely on one source is not advisable, I think.
| Cloudflare, AWS, and now OpenAI are all central clouds. This is
| why we need independent forums, StackOverflow, blogs, etc.
| Otherwise, it is yet another monopoly. Anyway, it's always
| important to explore multiple options for accurate information.
| At least, that is how I do it. YMMV.
| dgant wrote:
| For large categories of questions, I get better answers
| faster on ChatGPT. If I'm not asking the most basic question
| on a subject I'm usually better off than I would be
| searching.
| kweingar wrote:
| Here's my rule of thumb: if my search doesn't depend on
| recent information, and it is likely to return blog spam as
| the top result, then I will use ChatGPT instead.
|
| I still use web search frequently to find project
| homepages, official and up-to-date documentation, news and
| announcements, discussion (hearing people's stories of
| their experiences with a product is a lot better than
| ChatGPT's noncommittal and abstract pros/cons), searching
| for videos/images, etc.
| moffkalast wrote:
| GPT 4 just got browsing, so I've actually started telling
| it to do the entire research phase I was gonna do and
| just let it grind it out without having to despair at
| Google's abysmal search results. Still a bit unreliable
| but actually gets it done quite well on occasion.
| onetokeoverthe wrote:
| [dead]
| mrbombastic wrote:
| I was mostly joking, but I will say part of the appeal of
| ChatGPT is that 1) it is centralized, so for basically any
| question I have I can go to ChatGPT instead of hunting and
| pecking around the internet, and 2) answers are tailored to
| my needs, versus a blog or Stack Overflow post that is often
| close to, but not exactly, what I need. I'll survive a few
| hours of downtime, but these damn jest timers just got much
| more annoying to deal with.
| moffkalast wrote:
| Not only that, but it also acts very pleased with itself
| when it manages to solve an issue in one attempt, which is
| endlessly amusing.
| accrual wrote:
| Indeed, even very simple queries like addition get this
| response from ChatGPT:
|
| > Hmm...something seems to have gone wrong.
|
| Maybe an infrastructure thing?
| zurfer wrote:
| I first noticed it 30min ago when GPT 4 was down for me.
| circuit10 wrote:
| Other than by affecting the output length, the complexity of
| the input won't affect how likely a request is to succeed:
| each token takes a fixed amount of time to generate, and
| there's no way for the AI model itself to crash.
| hiddencost wrote:
| At sufficient scale, everything becomes a daily occurrence;
| hardware failure, for example.
| circuit10 wrote:
| Yes, but it's not related to the complexity of the input
| buildbot wrote:
| Given deterministic everything sure, but nothing really is.
| Perhaps GPU 7 has a very toasty neighbor and is slower.
| circuit10 wrote:
| What I'm saying is that it's not correlated with the
| complexity of the input, because GPT models don't have a
| built-in way to say "let me stop and think about this"; it's
| a fixed amount of computation per token.
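|
| For intuition, a toy greedy-decoding sketch (illustrative
| only; model.forward is a stand-in for one transformer forward
| pass, and this is not OpenAI's code). The work per emitted
| token is one forward pass, with no data-dependent "think
| longer" branch, so only the number of generated tokens
| changes the total cost:
|
|     def generate(model, prompt_ids, max_new_tokens, eos_id):
|         ids = list(prompt_ids)
|         for _ in range(max_new_tokens):
|             logits = model.forward(ids)   # one pass per token
|             last = logits[-1]             # scores for next token
|             next_id = max(range(len(last)), key=last.__getitem__)
|             ids.append(next_id)           # no "stop and think" path
|             if next_id == eos_id:
|                 break
|         return ids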
| samstave wrote:
| A 'post-mortem' outage report from an AI on the OpenAI outage
| would be pretty epic.
|
| Describe all failure modes you failed to see and describe in
| detail how these services failed, do not make up or lie, but
| describe what failed in sequence and list the services affected
| by each process which failed due to the cascade and also be as
| technically detailed as possible with links to specific
| documentation on how to handle or fix each failure
| shekhar101 wrote:
| Since recovering from this outage, I see a "search with Bing"
| option rather than the browsing option in the GPT-4 dropdown.
| harveywi wrote:
| Fortunately for OpenAI, they have no SLAs:
| https://help.openai.com/en/articles/5008641-is-there-an-sla-...
| londons_explore wrote:
| I would like to see them offer a decent SLA, but for an
| increased price.
|
| I.e.:
|
| No SLA: $1 per 1000 requests.
|
| With SLA: $2 per 1000 requests. For every minute of downtime,
| we refund 50% of your daily bill.
|
| Obviously they are free to design their systems to make SLA'd
| requests have priority when there is a capacity crunch or
| service issues, and that is really what those customers are
| paying for.
| rad_gruchalski wrote:
| So essentially the same price as without the SLA?
| londons_explore wrote:
| No... 10 minutes of downtime means they refund 5 days of
| typical billing...
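|
| Working through the proposed numbers (the request volume below
| is made up purely for illustration):
|
|     daily_requests = 1_000_000            # hypothetical volume
|     price_per_1k = 2.00                   # $ per 1k requests, with SLA
|     daily_bill = daily_requests / 1000 * price_per_1k   # $2,000
|
|     refund_per_minute = 0.50 * daily_bill # 50% of the daily bill
|     downtime_minutes = 10
|     refund = downtime_minutes * refund_per_minute       # $10,000
|
|     print(refund / daily_bill)            # 5.0 -> 5 days of billing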
| moffkalast wrote:
| Would this really be worth it for them when they can just
| charge everyone the $2 and tell them to pound sand when
| there's an outage? Not like there's a proper competitor
| yet.
| rad_gruchalski wrote:
| Ah, now I see it, thanks.
| thih9 wrote:
| If this is a hard requirement, then perhaps parametric
| insurance[1]?
|
| I don't have any examples at hand but I've heard about an
| insurance offer tied to SLAs for external products.
|
| [1]: https://en.wikipedia.org/wiki/Parametric_insurance
| bpodgursky wrote:
| This is an absolutely ridiculous ask for a company that is
| already unable to service the extraordinary demand for its
| product. AWS itself only credits a month if a service is down
| for over _30 hours_[1].
|
| People are going to use OpenAI if it's good. Nobody really
| cares about an SLA if the product is incredible.
|
| [1] https://aws.amazon.com/legal/service-level-
| agreements/?aws-s...
| Qworg wrote:
| I hate to talk my own book, but Microsoft does offer SLAs for
| Azure OpenAI.
|
| https://learn.microsoft.com/en-us/azure/cognitive-
| services/o...
|
| Same model availability as well.
| blibble wrote:
| By SLA you mean: during an outage the status page won't be
| updated for hours,
|
| and then they'll refund you 2c for the 8 seconds they say
| they were down for.
| ldjkfkdsjnv wrote:
| The product is so good and they're moving so fast that these
| outages make me bullish on the company.
___________________________________________________________________
(page generated 2023-05-24 23:01 UTC)