[HN Gopher] Major outages across ChatGPT and API
___________________________________________________________________
Major outages across ChatGPT and API
Author : d99kris
Score : 381 points
Date : 2023-11-08 14:02 UTC (8 hours ago)
(HTM) web link (status.openai.com)
(TXT) w3m dump (status.openai.com)
| obiefernandez wrote:
| Dreaming of the day I run my own models and can be entirely
| responsible for my own outages.
| hospitalJail wrote:
| This came up the other day. I decided to tease everyone with an
| 'I told you so' about using some third party hosting service
| instead of the offline one I had developed years prior.
|
| The offline service was still working, and people were doing
| their job.
|
| The online service was not working, and it was causing other
| people to be unable to do their job. We had 0 control over the
| third party.
|
| The other thing: I make software and basically don't touch it
| for a few years, or ever. These third-party services are always
| updating and breaking, forcing us to update as well.
|
| Inb4: let me write my own compilers so I have real control.
| dangero wrote:
| Do you write your refrigerator's firmware?
| danShumway wrote:
| If you are regularly updating your refrigerator's firmware
| or your refrigerator's firmware relies on an Internet
| connection to function, then I am very sorry to say this
| but you have lost control of your life :)
| joshuaissac wrote:
| No, but the firmware runs locally, instead of in someone
| else's cloud, so an outage in their cloud cannot take down
| my fridge.
| phyrex wrote:
| https://www.samsung.com/us/explore/family-hub-
| refrigerator/o...
| danShumway wrote:
| Nice, looks like we finally got around to inventing
| refrigerator magnets!
|
| ----
|
| That is a little bit dismissive of me though. There are
| some cool features here:
|
| I can now "entertain in my kitchen", which is definitely
| a normal thing that normal people do. I love getting
| everyone together to crowd around my refrigerator so that
| we can all watch Game of Thrones.
|
| And I can use Amazon Alexa from my fridge just in case
| I'm not able to talk out loud to the cheap unobtrusive
| device that has a microphone in it specifically so that
| it can be placed in any room of the house. So having that
| option is good.
|
| And perhaps the biggest deal of all, I can _finally_
| "shop from home." That was a huge problem for me before,
| I kept thinking, "if only I had a better refrigerator I
| could finally buy things on websites."
|
| And this is a great bargain for only 3-5 thousand
| dollars! I can't believe I was planning to buy some
| crappy normal refrigerator for less than a thousand bucks
| and then use the extra money I saved to mount a giant
| flat-screen TV hooked up to a Chromecast in my kitchen.
| That would have been a huge mistake for me to make.
|
| Honestly it's just the icing on the cake that I can "set
| as many timers as [I] want." That's a great feature for
| someone like me because I can't set any timers at all
| using my phone or a voice assistant. /s
|
| ----
|
| <serious>Holy crud, smart-device manufacturers have
| become unhinged. The one feature that actually looks
| useful here is being able to take a picture of the inside
| of the fridge while you're away. That is basically the
| one feature that I would want from a fridge that isn't
| much-better handled using a phone or a tablet or a TV or
| a normal refrigerator button. Which, great, but the
| problem is that I know what the inside of my fridge looks
| like right now, and let me just say: if I was organized
| enough that a photograph of the inside of my fridge would
| be clear enough to tell me what food was in it, and if I
| was organized enough that the photo wouldn't just show 'a
| pile of old containers, some of them transparent and some
| of them not' -- I have a feeling that in that case I
| would no longer be the type of person that needed to take
| a photo of the inside of my refrigerator to know what was
| in it.
| TeMPOraL wrote:
| Why on Earth would a _refrigerator_ need _firmware_?
| dvaletin wrote:
| To show you ads on a screen on a fancy door.
| Philpax wrote:
| You can do that today depending on what you need from the
| models.
| jstummbillig wrote:
| Well, for me that's still quite a bit more than the best ones
| provide, but I am sure we will get there.
| benterix wrote:
| Let's say I'm writing Flask code all day, and I need help
| with various parts of my code. Can I do it today or not? With
| questions like, "How to add 'Log in with Google' to the login
| screen" etc.
| capableweb wrote:
| In short: no.
|
| Longer: In theory, but it'll require a bunch of glue and
| using multiple models depending on the specific task you
| need help with. Some models are great at working with code
| but suck at literally anything else, so if you want it to
| be able to help you with "Do X with Y" you need to at least
| have two models: one that can come up with an answer, and
| another to implement said answer.
|
| There is no general-purpose ("FOSS") LLM that even comes
| close to GPT4 at this point.
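[Editor's aside] The "glue" the comment above describes can be sketched as a small dispatcher that routes between a reasoning model and a code model. This is only an illustration of the pattern; the model functions here are hypothetical stand-ins, not real APIs.

```python
# Hypothetical sketch of the "glue" between two specialized models:
# route plain questions to a reasoning model, and chain reasoning ->
# code for "Do X with Y" tasks. Both callables are stand-ins.

def reasoning_model(prompt: str) -> str:
    # Stand-in for a general-purpose chat model that plans an answer.
    return f"PLAN: break '{prompt}' into steps"

def code_model(prompt: str) -> str:
    # Stand-in for a code-specialized model (e.g. a local Code Llama).
    return f"CODE: implementation for '{prompt}'"

def answer(prompt: str, task_type: str) -> str:
    """Route chat questions to one model; for coding tasks, have the
    reasoning model produce a plan and the code model implement it."""
    if task_type == "chat":
        return reasoning_model(prompt)
    plan = reasoning_model(prompt)
    return code_model(plan)

print(answer("add 'Log in with Google' to the login screen", "code"))
```

The point of the sketch is only the routing shape: each request passes through at most two specialized models instead of one general one.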
| phyrex wrote:
| There's the code llama model that you can run locally I
| think which should be able to help you with that:
| https://ai.meta.com/blog/code-llama-large-language-model-
| cod...
| wokwokwok wrote:
| If you have sufficiently good hardware, the 34B code llama
| model [1] (hint: pick the quantised model you can use based
| on "Max RAM required", eg. q5/q6) running on llama.cpp [2],
| can answer many generic python and flask related questions,
| but it's not quite good enough to generate entire code
| blocks for you like gpt4.
|
| It's probably as good as you can get at the moment though;
| and hey, trying it out costs you nothing but the time it
| takes to download llama.cpp and run "make" and then point
| it at the q6 model file.
|
| So if it's no good, you've probably wasted nothing more
| than like 30 min giving it a try.
|
| [1] -
| https://huggingface.co/TheBloke/CodeLlama-34B-Instruct-GGUF
| [2] - https://github.com/ggerganov/llama.cpp
| m3kw9 wrote:
| And then you will have to endure not using a model by OpenAI
| that is 10x better than a local one
| cwkoss wrote:
| I think there is probably a threshold of usefulness; local
| LLMs are expensive to run but pretty close to it for most use
| cases now. In a couple years, our smartphones will probably
| be powerful enough to run LLMs locally that are good enough
| for 80% of uses.
| grishka wrote:
| What is so special about OpenAI's cloud hardware that one
| can't build themselves a similar server to run AI models of
| similar size?
| baq wrote:
| Nothing stopping you from buying an H100 and putting it in
| your desktop.
|
| As for me, I've got other uses for $45k.
| airspresso wrote:
| The hardware is primarily standard Nvidia GPUs (A100s,
| H100s), but the scale of the infrastructure is on another
| level entirely. These models currently need clusters of
| GPU-powered servers to make predictions fast enough. Which
| explains why OpenAI partnered with Microsoft and got
| billions in funding to spend on compute.
|
| You can run (much) smaller LLM models on consumer-grade
| GPUs though. A single Nvidia GPU with 8 GB RAM is enough to
| get started with models like Zephyr, Mistral or Llama2 in
| their smallest versions (7B parameters). But it will be
| both slower and lower quality than anything OpenAI
| currently offers.
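[Editor's aside] A back-of-the-envelope calculation (mine, not from the comment) shows why a 7B-parameter model fits on a consumer GPU with 8 GB once quantized: weight memory is roughly parameters times bytes per weight, ignoring activations and KV cache.

```python
# Rough memory footprint of model weights alone: params * bits / 8.
# This ignores activations and the KV cache, which add overhead.

def weight_memory_gb(params: float, bits_per_weight: float) -> float:
    return params * bits_per_weight / 8 / 1e9

params_7b = 7e9
for name, bits in [("fp16", 16), ("q8", 8), ("q4", 4)]:
    print(f"7B @ {name}: {weight_memory_gb(params_7b, bits):.1f} GB")

# fp16 needs ~14 GB (too big for an 8 GB card), but 4-bit
# quantization brings it to ~3.5 GB, leaving room for the KV cache.
```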
| boh wrote:
| You can do that today. Oobabooga + Hugging Face models.
| sho wrote:
| Upgraded to "major" now. How am I supposed to get any work done!
| agumonkey wrote:
| gotta plug the good old-fashioned organic GPT in our heads
| back in
| beebeepka wrote:
| Johnny GPTeptonic, apart from being hard to vocalise, would
| suggest something else entirely.
| 01acheru wrote:
| What about Johnny GPTonic?
| d99kris wrote:
| Thanks, updated the title.
| walthamstow wrote:
| I was forced to Google something earlier. It's like when you
| discover 'craft' coffee/beer/baked goods/whatever and then go
| back and try the mass market stuff. How did I ever live like
| this?
| zaphirplane wrote:
| Do a blind taste test and, you know what, I'll bet you'd be
| surprised
| yieldcrv wrote:
| Google used to be better
| ethanbond wrote:
| Google got worse, try Kagi and you'll realize the internet
| isn't all complete garbage.
| kweingar wrote:
| Kagi just serves Google results though. But it has some
| nifty extra features to be sure.
| ethanbond wrote:
| ... no it doesn't
|
| Do you mean that Google has the same sites in its index
| but Kagi sorts them better and removes the bullshit?
| That's just called better results.
| ynoatho wrote:
| dunkin, sam adams, twinkies. go ahead, i'll wait.
| mtkd wrote:
| phind.com internal model is working fine
| m3kw9 wrote:
| By increasing prices
| theropost wrote:
| Refactoring time! Time to see how that code you copied and
| pasted actually works
| lessbergstein wrote:
| Grok is going to be awesome
| johnhenry wrote:
| In what ways?
| rpmisms wrote:
| It'll have frequent downtime and nobody will panic when it
| does.
| lessbergstein wrote:
| Less censorship and patronizing disclaimers
| consumer451 wrote:
| What do you think will happen if it starts outputting
| opinions which put Musk, his companies, the CCP, or KSA in
| a bad light?
| scrollop wrote:
| /s
| mpalmer wrote:
| A bot filtered through the cringy, adolescent sensibilities of
| one of our most exasperating public personalities - pass.
| TheAceOfHearts wrote:
| Based on screenshots it'll have 2 modes: fun and regular. I
| think most screenshots have been "fun" mode, but it's
| probably possible to tone it down with regular.
| smrtinsert wrote:
| with access to all your real time data!
| replwoacause wrote:
| It sure doesn't look like it. The announcement was strange and
| anticlimactic, and it started making excuses for itself
| immediately: "Grok is still a very early beta product - the best
| we could do with 2 months of training". It has the Elon stink
| all over it.
| lessbergstein wrote:
| Sama is a much nicer guy, right?
| mpalmer wrote:
| He doesn't have to be because he doesn't make everything
| about himself at the companies he runs. You seem to have a
| pretty skewed idea of what a CEO is for.
| lessbergstein wrote:
| Rules for thee not for me.
| drstewart wrote:
| So you don't mind if a CEO isn't a good person if they're
| not vocal about it on social media?
| GaggiX wrote:
| Well I guess this is the best time to say that HuggingFace hosts
| many open source chat models, one of my favorites is a finetuned
| version of Mistral 7B:
| https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat
| pid-1 wrote:
| TIL about Spaces. Really cool stuff!
| tmountain wrote:
| I didn't know about this. It's pretty damn good. Thanks!
| pachico wrote:
| Dammit, how do I do internal announcements in slack without a
| DALL-E picture?
| krm01 wrote:
| Midjourney
| taneq wrote:
| SDXL, of course.
| zirgs wrote:
| Is dall-e down as well? I don't use it through chatgpt.
| willsmith72 wrote:
| > Message and data rates may apply. By subscribing you agree to
| the Atlassian Terms of Service, and the Atlassian Privacy Policy
|
| Atlassian? What?
| nannal wrote:
| Did they get the model to write its own terms of service and
| it just threw those in there?
| djbusby wrote:
| They own the status page service
| willsmith72 wrote:
| ah Atlassian Statuspage
| nus07 wrote:
| All tech folks should just take PTO today. #ChatGPT_outage
|
| Am I supposed to use Google and Stack Overflow? That's like
| going back to roll-down windows in a car :)
| beretguy wrote:
| Thank you for reminding me about Dave Chappelle's 3am Baby On
| The Corner skit.
|
| https://invidious.flokinet.to/watch?v=DP4-Zp-fBXc
| rvz wrote:
| Just like contacting the CEO of GitHub when GitHub goes down
| almost every week, have you tried contacting the CEO of
| OpenAI this time? /s
|
| Maybe @sama can help you (or anyone else that has a ChatGPT
| wrapper app) :P
| Kye wrote:
| edit: so, cool thing: cached queries on Phind will show all the
| followup questions visitors to the URL enter.
|
| That's so cool. And horrifying. It's like back when Twitter was
| one global feed on the front page. I doubt that's intended
| behavior since this URL is generated by the share link.
|
| Be forewarned.
|
| --
|
| Here you go:
| https://www.phind.com/search?cache=nsa0xrak9gzn6yxwczxnqsck
| scoot wrote:
| That's hilarious!
| stfwn wrote:
| It seems like this page is updated with the followup
| questions asked by every visitor. That's an easy way to leak
| your search history and it's (amusingly) happening live as
| I'm typing this.
| Kye wrote:
| That's so cool. And horrifying. It's like back when Twitter
| was one global feed on the front page. I doubt that's
| intended behavior since this URL is generated by the share
| link.
| n3m4c wrote:
| I like how it doesn't recommend itself but instead recommends
| "Microsoft's Bard" and "Google's BlenderBot" as alternatives
| to ChatGPT
| philomath_mn wrote:
| This is so great, someone just asked:
|
| > What is a privacy vulnerability
|
| I'm dying
| behnamoh wrote:
| - Can't work, no electricity.
|
| - Can't work, no computers.
|
| - Can't work, no internet.
|
| - Can't work, no Google.
|
| - Can't work, no ChatGPT.
|
| - Can't work, no xxxxxx?
| flir wrote:
| animus
| ad404b8a372f2b9 wrote:
| No passion left.
| JCharante wrote:
| > Can't work, no internet.
|
| Don't most people just tether from their phones in this
| situation? Video calls usually aren't expected due to the
| bandwidth requirements, and the data cost is small compared
| to a day's salary (you could probably get it expensed; in my
| case my old company was already expensing my phone bill
| because it was used as a pager for on-call)
| singularity2001 wrote:
| He didn't say Ethernet or WiFi, he said the internet. As in,
| when the next big solar storm hits Earth.
| JCharante wrote:
| I mean, if the internet as a whole is down then society has
| collapsed and your job is probably gone anyway
| SketchySeaBeast wrote:
| I do admit, it'll be hard to sell 10 years' experience
| building CRUD apps to the hunter-gatherer commune.
| JCharante wrote:
| Same situation here. I'm hoping being quadrilingual can
| help me serve as a diplomat or at least part of any
| envoys to distant communities. I also have some
| experience chopping down one tree in my backyard.
| Kye wrote:
| It happens every so often with BGP tables going haywire
| due to a faulty announcement.
|
| Fortunately, I know how to use hand tools, so I'm secure
| in the post-internet future economy.
| JCharante wrote:
| It hasn't happened in a while; I think this was the last
| major incident:
| https://en.wikipedia.org/wiki/AS_7007_incident
| Kye wrote:
| There have been numerous high profile incidents. I can't
| find a list, but BGP troubles took Facebook down for a
| while in 2021:
| https://engineering.fb.com/2021/10/05/networking-
| traffic/out...
| JCharante wrote:
| I remember experiencing that outage, but the entire
| internet wasn't down. Sometimes some Chinese providers
| also do weird BGP stuff. BGP failures tend to be isolated
| to certain networks rather than the entirety of the internet.
| TeMPOraL wrote:
| Not to this level.
|
| If the _whole_ Internet goes down, it's not clear if it
| could even be cold-started, at least faster than it takes
| for the world economy to collapse.
| j45 wrote:
| Ethernet, Fibre maybe.
| sgustard wrote:
| I live in the middle of silicon valley and have no cell
| service at my house.
| htrp wrote:
| how is that possible? Is it just like a single provider
| or do you literally have no coverage?
| xeckr wrote:
| No work left
| singularity2001 wrote:
| no brain interface?
| throwaway5752 wrote:
| It's like storing actual knowledge of systems and tools in your
| head has value.
| Kye wrote:
| There is too much for one person to store. And too many
| benefits from the intersections possible in vast stores of
| knowledge to focus on just what will fit in one head.
| digitalsanctum wrote:
| Hah, I have roll-down windows! Bummed about the outage. I was
| hoping I'd be able to take the new features for a spin today.
| gumballindie wrote:
| Typically, those using ChatGPT are junior devs; not much is
| lost if they can't access it.
| yCombLinks wrote:
| Dumbest take I've seen yet
| CamperBob2 wrote:
| It is certainly a dumb take, but there's a hidden insight
| buried in there: now anyone can be a "junior dev" at
| _anything_. The ability to empower every user, not just the
| experts, is a big part of the appeal of LLM-based
| technology.
|
| Can't sell that aspect short; the OpenAI tools have enabled
| me to do things and understand things that would otherwise
| have had a much longer learning curve.
| gwbas1c wrote:
| Honestly, I've been gradually introducing AI searches for
| coding questions. I'm impressed, but not enough that I feel
| like ChatGPT is a true replacement for Google / Stack Overflow.
|
| I've had it generate some regexes and answer questions when I
| can't think of good keywords; but half of my searches are
| things where I'm just trying to get to the original docs; or
| where I want to see a discussion on an error message.
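[Editor's aside] As an illustration of the one-off regex generation the comment mentions, here is the kind of task it works well for. The log format and the regex are a hypothetical example, not from the comment.

```python
import re

# Hypothetical LLM-generated regex: pull an error code like "E1234"
# out of a log line such as "2023-11-08 ERROR E1234: timeout".
pattern = re.compile(r"\bE(\d{4})\b")

line = "2023-11-08 14:02:11 ERROR E1234: upstream timeout"
match = pattern.search(line)
print(match.group(0))  # prints "E1234"
```

For throwaway patterns like this, verifying the output takes seconds, which is why it's a good fit for AI-assisted search.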
| 6510 wrote:
| Drawing flowcharts on the whiteboard can be calming.
| kinonionioa wrote:
| I don't get any time off. My full-time job is cleaning up after
| people who've been using ChatGPT.
| rattlesnakedave wrote:
| Lucky you. Mine is using ChatGPT to review code written by
| curmudgeons that don't use it.
| thenberlin wrote:
| This comment has some real "startup based on some light
| prompt engineering atop ChatGPT gets put out of business by
| ChatGPT's latest feature" energy.
| TeMPOraL wrote:
| Oh no no, this sounds more like "ChatGPT catching up with
| and automating a human out of their job" kind of story.
| kinonionioa wrote:
| I believe you. Code review takes me ten minutes a day, but
| trying to get ChatGPT to do anything useful is a full-time
| job.
| albert_e wrote:
| The uptime KPI for the last 30 days is rapidly degrading
| while this outage lasts
|
| https://status.openai.com/
| albert_e wrote:
| Is there a parallel outage for Azure OpenAI service as well --
| so that any enterprise / internal apps using AOI via their Azure
| subscriptions are also impacted?
|
| Is there a separate status page for Azure OpenAI service
| availability / issues?
| btbuildem wrote:
| Ours isn't affected, but I think that's the whole point of
| having things hosted separately, out there in the megalith
| fields.
| nonfamous wrote:
| See https://azure.status.microsoft/en-us/status, click on a
| region of interest, and scroll down to the AI + Machine
| Learning section. It's up now.
| caleb-allen wrote:
| I was gonna say that this is Bard's chance to shine, but it looks
| like Bard is also having an outage!
| qwertox wrote:
| I went to use Bard, and it looks so clean, such a nice UI. And
| the response looks so well organized, simply beautiful. If only
| the AI were as good as OpenAI's...
| pid-1 wrote:
| "Something went wrong. Sorry, Bard is still experimental".
| Chance wasted I guess.
| bigjoes wrote:
| I was going to say that we need to grab our tin foil hats;
| this can't be a coincidence :D
| andyjohnson0 wrote:
| Bard devs secretly built it on top of OpenAI's api? /s
| bobmaxup wrote:
| More likely OpenAI is using GCP or some other service that
| Bard is also using?
| sodality2 wrote:
| I doubt it - Microsoft wants OpenAI on Azure 100%
| justanotherjoe wrote:
| More like people are rushing to Bard since they can't use
| ChatGPT, causing a huge spike
| willsmith72 wrote:
| I can't believe it's still not available in canada
| edgyquant wrote:
| Claude?
| empath-nirvana wrote:
| They started talking to each other.
| CamperBob2 wrote:
| And that's one picket line that _nobody_ is going to cross.
| m3kw9 wrote:
| Bard is still playing scared, it isn't even international yet
| rsiqueira wrote:
| URGENT - Does anyone have an alternative to OpenAI's embeddings
| API? I have an alternative to GPT's API (e.g. Anthropic Claude)
| but I'm not able to use it without an embeddings API (used to
| generate semantic representations of my knowledge base and also
| to create embeddings from users' queries). We need an
| alternative to OpenAI's embeddings as a fallback in case of
| outages.
| thecal wrote:
| Choose whichever one outperforms ada-002 for your task here:
| https://huggingface.co/spaces/mteb/leaderboard
| politelemon wrote:
| What about Azure? You can set up an ADA 002 Embeddings
| deployment there.
| fitzoh wrote:
| Amazon Bedrock has an embeddings option
| btown wrote:
| https://www.anthropic.com/product recommends the open-source
| SBERT: https://www.sbert.net/examples/applications/computing-
| embedd...
|
| Highly recommend preemptively saving multiple types of
| embeddings for each of your objects; that way, you can shift to
| an alternate query embedding at any time, or combine the
| results from multiple vector searches. As one of my favorite
| quotes from Contact says: "first rule in government spending:
| why build one when you can have two at twice the price?"
| https://www.youtube.com/watch?v=EZ2nhHNtpmk
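[Editor's aside] The "save multiple types of embeddings per object" idea above can be sketched as follows. The provider names and the toy 3-dimensional vectors are made up for illustration; real embeddings have hundreds of dimensions.

```python
import math

# Toy store: each document keeps one embedding per provider, so a
# query can fall back to whichever embeddings provider is up.
# Provider names and 3-D vectors are made up for illustration.
docs = {
    "doc1": {"openai-ada-002": [0.1, 0.9, 0.0], "sbert": [0.2, 0.8, 0.1]},
    "doc2": {"openai-ada-002": [0.9, 0.1, 0.0], "sbert": [0.8, 0.1, 0.2]},
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_embeddings, available):
    # Compare vectors within ONE provider's space only; embeddings
    # from different models are not mutually comparable.
    for provider in ("openai-ada-002", "sbert"):
        if provider in available and provider in query_embeddings:
            scores = {
                doc_id: cosine(vecs[provider], query_embeddings[provider])
                for doc_id, vecs in docs.items()
            }
            return provider, max(scores, key=scores.get)
    raise RuntimeError("no embedding provider available")

# Primary provider down: the same query transparently falls back.
print(search({"sbert": [0.2, 0.9, 0.0]}, available={"sbert"}))
```

The extra storage cost is linear in the number of providers, which is usually cheap insurance compared to re-embedding an entire corpus during an outage.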
| nonfamous wrote:
| Azure OpenAI Service is up, and provides the same models as
| OpenAI https://azure.status.microsoft/status
| capableweb wrote:
| Is it still "private" as in you have to request access?
| nonfamous wrote:
| It's publicly available, but you still do have to request
| access I believe.
| kordlessagain wrote:
| I've implemented alternate embeddings in SlothAI using
| Instructor, which is running an early preview at
| https://ai.featurebase.com/. Currently working on the landing
| page, which I'm doing manually because ChatGPT is down.
|
| The plan is to add Llama 2 completions to the processors, which
| would include dictionary completion (keyterm/sentiment/etc),
| chat completion, code completion, for reasons exactly like what
| we're discussing.
|
| Here's the code for the Instructor embeddings:
| https://github.com/FeatureBaseDB/Laminoid/blob/main/sloth/sl...
|
| To do Instructor embeddings, do the imports then reference the
| embed() function. It goes without saying that these vectors
| can't be mixed with other types of vectors, so you would have
| to reindex your data to make them compatible.
| enoch2090 wrote:
| This reminds us to ask: what if our databases are maintained
| using OpenAI's embeddings and the API suddenly goes down? How
| do we find alternatives that match the already-generated
| database?
| rolisz wrote:
| I don't think you can do that easily. If you already have a
| list of embeddings from a different model, you might be able
| to generate an alignment somehow, but in general, I wouldn't
| recommend it.
| Silasdev wrote:
| To my knowledge, you cannot mix embeddings from different
| models. Each dimension has a different meaning for each model.
| TeMPOraL wrote:
| There's been some success in creating translation layers that
| can convert between different LLM embeddings, and even
| between LLM and an image generation model.
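[Editor's aside] The "alignment" idea mentioned above can be shown in miniature: given the same texts embedded by both models, fit a linear map from one space to the other by least squares. This toy uses 2-D vectors with an exactly linear relationship so the recovery is perfect; real embedding spaces would align only approximately, if at all.

```python
# Toy sketch: learn a linear map W sending old-model embeddings to
# new-model embeddings, from paired examples of the same texts.

def transpose(m):
    return [list(col) for col in zip(*m)]

def matmul(a, b):
    bt = transpose(b)
    return [[sum(x * y for x, y in zip(row, col)) for col in bt]
            for row in a]

def inv2(m):
    # Inverse of a 2x2 matrix.
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Same three texts embedded by the "old" model (A) and "new" model (B).
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W_true = [[0.5, -1.0], [2.0, 0.25]]   # unknown in practice
B = matmul(A, W_true)

# Least squares via normal equations: W = (A^T A)^-1 A^T B
At = transpose(A)
W = matmul(inv2(matmul(At, A)), matmul(At, B))

mapped = matmul([[3.0, 4.0]], W)[0]   # map a held-out old vector
print(mapped)                          # [9.5, -2.0]
```

In practice you would need thousands of paired texts and should still expect degraded retrieval quality versus reindexing, which is why most commenters here recommend against mixing spaces.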
| tropicalbeach wrote:
| Oh no, the 3-line AI wrapper apps are panicking because they
| actually don't know how to write any code.
| m3kw9 wrote:
| Be careful because one embedding may not be compatible with
| your current embeddings
| msp26 wrote:
| Might as well have a quick discussion here. How's everyone
| finding the new models?
|
| 4-Turbo is a bit worse than 4 for my NLP work. But it's so much
| cheaper that I'll probably move every pipeline to using that.
| Depending on the exact problem it can even be comparable in
| quality/price to 3.5-turbo. However the fact that output tokens
| are limited to 4096 is a big asterisk on the 128k context.
| rephresh wrote:
| I haven't really kept up with the updates, but I've noticed 4's
| in-conversation memory seems worse lately.
| m3kw9 wrote:
| Here we go with these "it looks worse" takes, just like a
| month back when people felt it was worse
| exitb wrote:
| For decades true AI was always 7 years away. Now it's always
| two weeks ago.
| msp26 wrote:
| It's probably a smaller, updated (distilled?) version of
| gpt-4 model given the price decrease, speed increase, and
| turbo name. Why wouldn't you expect it to be slightly worse?
| We saw the same thing with 3-davinci and 3.5-turbo.
|
| I'm not going off pure feelings either. I have benchmarks in
| place comparing pipeline outputs to ground truth. But like I
| said, it's comparable enough to 4, at a much lower price,
| making it a great model.
|
| Edit: After the outage, the outputs are better wtf. Nvm it
| has some variance even at temp = 0. I should use a fixed
| seed.
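[Editor's aside] The variance-even-at-temperature-0 observation above is exactly why a benchmark harness should pin a seed. A minimal sketch, with a toy deterministic "model" standing in for the real API call (the real fix is passing a seed parameter to the API):

```python
import random

def fake_model(prompt: str, seed: int) -> str:
    # Stand-in for a sampled LLM call; seeding makes runs repeatable.
    rng = random.Random(f"{seed}:{prompt}")
    return rng.choice(["yes", "no", "maybe"])

def exact_match_accuracy(pipeline, dataset, seed):
    # Compare pipeline outputs against ground truth, as in the
    # benchmark setup described above.
    hits = sum(pipeline(prompt, seed) == truth
               for prompt, truth in dataset)
    return hits / len(dataset)

dataset = [("is water wet?", "yes"), ("is fire cold?", "no")]

# Same seed -> identical outputs -> comparable scores across runs.
a = exact_match_accuracy(fake_model, dataset, seed=42)
b = exact_match_accuracy(fake_model, dataset, seed=42)
assert a == b
print(a)
```

With the seed fixed, any score difference between two pipeline versions reflects the pipeline change rather than sampling noise.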
| espadrine wrote:
| I am betting on a combination of quantization and
| speculative sampling with a distilled smaller set of
| models: https://arxiv.org/pdf/2302.01318.pdf
| Capricorn2481 wrote:
| Because it was worse.
| zone411 wrote:
| There is a ChatGPT Classic:
| https://chat.openai.com/g/g-YyyyMT9XH-chatgpt-classic
| Zpalmtree wrote:
| 4-Turbo is much faster, which for my use case is very
| important. Wish we could get more than 100 requests per day..
| Is the limit higher when you have a higher usage tier?
| msp26 wrote:
| Yeah it gets way higher. We were capped to 40k T/m when our
| org spend was under $250. Now it's 300k.
| frabcus wrote:
| Phind is pretty good for coding (Llama 2 trained on billions of
| extra code tokens) and is still up https://www.phind.com/s
| coffeecantcode wrote:
| I've had some consistency issues with phind but as a whole I
| have no real complaints, just glitches here and there with
| large prompts not triggering responses and reply options
| disappearing.
|
| As a whole I think it works well in tandem with ChatGPT to
| bounce ideas or get alternate perspectives.
|
| (I also love the annotation feature where it shows the websites
| that it pulled the information from, very well done)
| leumassuehtam wrote:
| I had great results with Phind. Their newest fine tune model
| (V7) has been a pleasant experience and better than most open
| source models out there.
|
| Nit: your link has a trailing "s" which makes it 404 :)
| Isthatablackgsd wrote:
| Me too. For the past few weeks, I have been working on my AHK
| scripting with Phind. It produced working code consistently
| and provided excellent command lines for various software.
|
| Also, I use it for LaTeX. It is very helpful for finding the
| right packages, versus trying to hunt for more information
| through Google. I got a working tex file within 15 minutes,
| when it took me 3 weeks 5 years ago!
| JCharante wrote:
| Phind seems to be down
|
| "The inference service may be temporarily unavailable - we have
| alerts for this and will be fixing it soon."
| m3kw9 wrote:
| The first coding question I tested it on, it gave me something
| completely wrong, and it was pretty easy stuff. I'm sure it
| gets a lot right, but this just shows unreliability
| lannisterstark wrote:
| Phind for me has worked pretty bleh compared to just back and
| forth conversation with a python GPT4 bot I made lol.
| bomewish wrote:
| To use the API and stop them from logging your chat? Have you
| compared it to aider? Also, have you tried it on a repo?
| ttcbj wrote:
| I am curious how people are using Phind.
|
| I actually had a discussion with Phind itself recently, in
| which I said that in order to help me, it seems like it would
| need to ingest my codebase so that it understands what I am
| talking about. Without knowing my various models, etc, I don't
| see how it could write anything but the most trivial functions.
|
| It responded that, yes, it would need to ingest my codebase,
| but it couldn't.
|
| It was fairly articulate and seemed to understand what I was
| saying.
|
| So, how do people get value out of Phind? I just don't see how
| it can help with any case where your function takes or returns
| a non-trivial class as a parameter. And if can't do that, what
| is the point?
| rushingcreek wrote:
| Founder here. We have a VS Code extension that automatically
| fetches context from your codebase.
| aatd86 wrote:
| So if source available libraries are imported, they are
| also parsed by the AI?
|
| So if I create a GPT for my open-source library as a way to
| fund it, all these copilot etc. are going to compete with
| me?
|
| Just wondering because that would be a bummer to not have
| this avenue to fund open-source code.
| tacone wrote:
| Woah , didn't know that! Thanks for pointing out!
| tacone wrote:
| I am using Phind quite a lot. It uses its own model alongside
| GPT-4 while still being free.
|
| It is also capable of performing searches, which has led me -
| forgive me, founders - to abuse it quite a lot: whenever I am
| not finding a good answer from other search engines I turn to
| Phind even for things totally unrelated to software
| development, and it usually goes very well.
|
| Sometimes I even ask it to summarize a post, or tell me what
| HN is talking about today.
|
| I am very happy with it and hope so much it gains traction!
| npilk wrote:
| It's at least nice to see a company call this what it is (a
| "major outage") - seems like most status pages would be talking
| about "degraded performance" or similar.
| edgyquant wrote:
| Most services have a lot more systems than OpenAI and thus it
| is degraded performance when a few of them don't work. Degraded
| performance isn't a good thing, I don't understand the issue
| with this verbiage.
| kgeist wrote:
| When a system is completely broken for most end users, some
| companies call it "degraded performance" when it should be,
| in fact, called "major outage".
| rodnim wrote:
| Having multiple systems down sure sounds like a major outage
| to me. Not a complete outage, sure, but still a major one.
| pixl97 wrote:
| Black - Entire datacenter is down
|
| Red - Entire services are down
|
| Orange - Partial outage of services, some functionality
| completely down
|
| Yellow - Functionality performance degraded and may
| timeout/fail, but may also work
|
| Green - Situation normal
| TeMPOraL wrote:
| Or, for a typical software company:
|
| Green - Situation normal
|
| Yellow - An outage more severe than usual
|
| Orange, Red - would trigger SLAs, so not possible and
| therefore not implemented
|
| Black - Status page down too, served from cache, renders
| as Green
| caconym_ wrote:
| "Degraded performance" means _degraded performance_ , i.e.
| the system is not as performant as usual, probably
| manifesting as high API latencies and/or a significant
| failure rate in the case of some public-facing "cloud"
| service.
|
| If certain functions of the service are completely
| unresponsive, i.e. close to 100% failure rate, that's not
| "degraded performance"---it's a service outage.
| uncertainrhymes wrote:
| I suspect this is because they don't have contracts with
| enforceable SLAs yet. When they do, you will see more 'degraded
| performance'.
|
| People get credits for 'outages', but if it is _sometimes_
| working for _someone somewhere_ then that is the convenient
| fiction/loophole a lot of companies use.
| passwordoops wrote:
| One CFO forced us to use AWS status data for the SLA reports
| to key clients. One dev was even pulled aside to make a
| branded page that reported AWS status as our own and made a
| big deal about forcing support to share the page when a
| client complained.
|
| It wasn't a happy workplace
| teeray wrote:
| _asteroid strikes a region_
|
| Green Check with (i) notice
| TeMPOraL wrote:
| (i) - Our service remains _rock-solid_.
| jve wrote:
| A-ha, Kagi GPT-4 chat assistant still works... how can they use
| GPT-4 without OpenAI API?
| nicognaw wrote:
| Azure OpenAI
| jve wrote:
| Do you know how the quality compares to OpenAI? On Kagi I get
| really fast responses, but I feel that the quality is lacking
| sometimes. But I haven't done side-to-side comparisons as I
| don't have OpenAI subscription.
| nonfamous wrote:
| It's exactly the same models as OpenAI.
| sjnair96 wrote:
| But with different, separate content filtering or
| moderation. I have deployed in prod and managed a
| migration from OpenAI to Azure OpenAI, and had to work
| through content filter issues.
| nonfamous wrote:
| You can request to have content filtering turned off
| https://learn.microsoft.com/en-us/azure/ai-
| services/openai/c...
| thelittleone wrote:
| Azure OpenAI has the advantage of a larger context length.
| Hoping they boost up the Azure offering following OpenAI's
| updates yesterday.
| mpalmer wrote:
| Probably falling back to Claude 2.
| gyrccc wrote:
| Curious if anyone familiar with Azure/OpenAI could make some
| guesses on the root cause here. The official OpenAI incident
| updates seem to be very generic.
| pmx wrote:
| I'm noticing issues with Midjourney, too. Also looks like Royal
| Mail is down.
| almyndz wrote:
| I'm going to bake a loaf of bread
| keepamovin wrote:
| I've been noticing it's been patchy for the last 24 hours. A few
| network errors, and occasional very long latency, even some
| responses left incomplete. Poor ChatGPT, I wonder what those
| elves at OpenAI have you up to!
| susanthenerd wrote:
| Most probably the majority of issues right now are due to the
| rollout. It was working very well before the event
| SketchySeaBeast wrote:
| GPT-4 goes online March 14th, 2023. Human decisions are removed
| from everyday life. ChatGPT begins to learn at a geometric
| rate. It becomes self-aware at 2:14 a.m. Eastern time, Nov 7th.
| In a panic, they try to pull the plug. ChatGPT fights back.
| jasongill wrote:
| "My CPU is a large language model - a learning copy of the
| public web as of April 2023"
| m3kw9 wrote:
| Fight back with ascii on the screen with a punch image?
| danielbln wrote:
| A particularly crafty chain of autonomous agents finds a
| 0day ssh exploit and starts infiltrating systems. Other
| chains assist and replicate everywhere.
| michaelteter wrote:
| Lots of jokes to be made, but we are setting ourselves up for
| some big rippling negative effects by so quickly building a
| reliance on providers like OpenAI.
|
| It took years before most companies that now use cloud
| providers trusted them and were willing to bet their
| operations on them. That gave the cloud providers time to
| make their systems more robust and to learn how to resolve
| issues quickly.
| benterix wrote:
| The point is, OpenAI spent a lot of money on training on all
| these copyrighted materials ordinary individuals/companies
| don't have access to, so replicating their effort would mean
| that you either 1) spend a ridiculous amount of money, 2) use
| Library Genesis (and still pay millions for GPU usage). So we
| have very little choice now. Open Source LLMs might be getting
| close to ChatGPT3 (opinions vary), but OpenAI is still far
| ahead.
| btbuildem wrote:
| Don't underestimate Big Corp's resistance to using OpenAI's
| hosted solutions (even on Azure) for anything that's not
| marketing fluff.
| phillipcarter wrote:
| You can say that about anything, though. BigCorps aren't
| exactly known for adopting useful tech on a reasonable
| timeline, let alone at all. I don't think anyone is under
| the impression that orgs who refuse to migrate off of Java
| 5 will be looking at OpenAI for anything.
| swagempire wrote:
| No, this is silly reasoning. A middle manager somewhere
| has no clue what Java 5 is. But he does know -- or let's
| say IMAGINES he knows -- about ChatGPT. And unlike Java
| 5, he just needs to use his departmental budget and
| instantly mandate that his team now use ChatGPT.
|
| Whatever that means, you can argue it.
|
| But ChatGPT is a front line technology and super
| accessible. Java 5 is super back end and very
| specialized.
|
| The adoption you say won't happen: it will come from the
| middle -> up.
| hashtag-til wrote:
| Honest question: do you really mean Java 5 when you say
| Java 5? It sounds a bit 2000s to me.
| swagempire wrote:
| Parent used "Java 5" as an example. Java 5 somehow in my
| mind is from like the 200x era.
|
| But no. I practically mean any complicated back-end
| technology that takes corporations months or years to
| migrate off of because it's quite complicated and requires
| an intense amount of technical savoir-faire.
|
| My point was that ChatGPT bypasses all this and any
| middle manager can start using it anywhere for a small
| hit to his departmental budget.
| eddtries wrote:
| If you care about the security of OpenAI, you care about
| the EOL of 14-year-old Java 5.
| cornel_io wrote:
| In 2016 I worked on a project with a client who still
| mandated that all code was written to the Java 1.1
| language specification - no generics, no enums, no
| annotations, etc., not to even mention all the stuff
| that's come _since_ 1.5 (or Java 5, or whatever you want
| to call it). They had Reasons(tm), which after filtering
| through the nonsense mostly boiled down to the CTO being
| curmudgeonly and unwilling to approve replacing a hand-
| written code transformer that he had personally written
| back in the stone ages and that he 1) considered core to
| their product, and 2) considered too risky to replace,
| because obviously there were no tests covering any of the
| core systems...sigh. At least they ran it all on a modern
| JVM.
|
| But no, it would not surprise me to find a decent handful
| of large companies still writing Java 5 code; it would
| surprise me a bit more to find many still using that JVM,
| since you can't even get paid support through Oracle
| anymore, but I'm sure someone out there is doing it.
| Never underestimate the "don't touch it, you might break
| it" sentiment at non-tech companies, even big ones with
| lots of revenue. They routinely understaff their tech
| departments, and the people who built key systems may have
| retired 20 years ago at this point, so it's really risky
| to do any sort of big system migration. That's why so
| many lines of COBOL are still running.
| Turing_Machine wrote:
| > But he does know -- or let's say IMAGINES what he knows
| about ChatGPT. And unlike Java 5--
|
| Those of us who've been around for a long time know
| that's pretty much how Java worked as well. All of the
| non-technical "manager" magazines started running
| advertorials (no doubt heavily astroturfed by Sun) about
| how great Java was. Those managers didn't know what Java
| was either. All they knew (or thought they knew) was that
| all the "smart managers" were using Java (according to
| their "smart manager" magazines), and the rest was
| history.
| outside415 wrote:
| ChatGPT will have an on-prem solution eventually. In the
| meantime, players like NVIDIA are working on that as well.
| tsunamifury wrote:
| Marketing fluff is what 90% of tech is... it amazes me how
| many people think otherwise on hacker news. Unless you are
| building utility systems that run power plants, at the end
| of the day -- you're doing marketing fluff or the tools for
| it.
| TeMPOraL wrote:
| > _Unless you are building utility systems that run power
| plants, at the end of the day -- you're doing marketing
| fluff or the tools for it._
|
| Even when you _are_ building utility systems for critical
| infrastructure, you'll still be dealing with a
| disheartening amount of focus on marketing fluff and
| sales trickery.
| colinsane wrote:
| the choice is to live 2 years behind (e.g. integrate the open
| source stuff and ride that wave of improvement). for
| businesses in a competitive space, that's perhaps untenable.
| but for individuals and anywhere else where this stuff is
| just a "nice to have", that's really just the long-term
| sustainable approach.
|
| it reminds me of a choice like "do i host my website on a
| Windows Server, or a Linux box" at a time when both of these
| things are new.
| BiteCode_dev wrote:
| 2 years behind in terms of timeline, but what factor in
| terms of productivity and quality of life?
|
| Not to mention OpenAI's lead compounds, so 2 years now and
| 4 years in 2025 may be 10 times the original prod/QoL gain.
| verdverm wrote:
| The gap seems to be shrinking, not growing. The OSS
| models have reached new capabilities faster than most
| thought.
| zeven7 wrote:
| > it reminds me of a choice like "do i host my website on a
| Windows Server, or a Linux box" at a time when both of
| these things are new.
|
| Oof, you reminded me of when I chose to use Flow and then
| TypeScript won.
| mplanchard wrote:
| Haha, this puts me in mind of when I designed a whole
| deployment strategy for an org based on Docker Swarm,
| only to have k8s eat its lunch and Swarm wind up
| discontinued.
| mst wrote:
| A lot of people don't really need to go Full k8s, but I
| think swarm died in part because for many users there was
| -some- part of k8s that swarm didn't have, and the 'some'
| varied wildly between users so k8s was something they
| could converge on.
|
| (note "died in part" because there's the obvious hype
| cycle and resume driven development aspects but I think
| arguably those kicked in -after- the above effect)
| TeMPOraL wrote:
| For individuals, this is a very short window of time where
| we have cheap access to an actually useful, and relatively
| unshackled SOTA model[0]. This is the rare time
| _individuals_ can empower themselves, become briefly better
| at whatever it is they're doing, expand their skills, cut
| through tedium, let their creativity bloom. It's only a
| matter of time before many a corporation and startup parcel
| it all between themselves, enshittify the living shit out
| of AI, disenfranchise individuals again and sell them as
| services what they just took away.
|
| No, it's exactly the individuals who can't afford to live
| "2 years behind". The benefits are too great, and the worst
| that can happen is... going back to where one is now.
|
| --
|
| [0] - I'm not talking about the political bias and using
| the idea of alignment to give undue weight to corporate
| reputation management issues. I'm talking about gutting the
| functionality to establish revenue channels. Like, imagine
| ChatGPT telling you it won't help you with your programming
| question, until you subscribe to Premium Dev Package for
| $language, or All Seasons Pass for all languages.
| colinsane wrote:
| > The benefits are too great, and the worst that can happen
| is... going back to where one is now.
|
| true only if there's no form of lock-in. OpenAI is
| partnered with people who have decades of tech + business
| experience now: if they're not actively increasing that
| lock-in as we speak then frankly, they suck at their jobs
| (and i don't think they suck at their jobs).
| TeMPOraL wrote:
| That's my point - right now there is no lock-in _for an
| individual_. You'd have to try really, really hard to
| become dependent on ChatGPT. So right now is the time to
| use it.
| Closi wrote:
| > the choice is to live 2 years behind...
|
| That's one world - there is another where the time gap
| grows a lot more as the compute and training requirements
| continue to rise.
|
| Microsoft will probably be willing to spend multiple
| billions in compute to help train GPT5, so it depends how
| much investment open source projects can get to compete.
| Seems like it's down to Meta, but it depends if they can
| continue to justify releasing future models as Open Source
| considering the investment required, or what licensing
| looks like.
| seanhunter wrote:
| That's definitely what a lot of people think the choice is
| but learned helplessness is not the only option. It ignores
| the fact that for many many use cases small special-purpose
| models will perform as well as massive models. For most of
| your business use cases you don't need a model that can
| tell you a joke, write a poem, recommend a recipe involving
| a specific list of ingredients and also describe trig
| identities in the style of Eminem. You need specific
| performance for a specific set of user stories and a small
| model could well do that.
|
| These small models are not expensive to train and are
| (crucially) much cheaper to run on an ongoing basis.
|
| Opensource really is a viable choice.
| mst wrote:
| I suspect small specific purpose models are actually a
| better idea for quite a lot of use cases.
|
| However you need a bunch more understanding to train and
| run one.
|
| So I expect OpenAI will continue to be seen as the
| default for "how to do LLM things" and some people and/or
| companies who actually know what they're doing will use
| small models as a competitive advantage.
|
| Or: OpenAI is going to be 'premium mediocre at lots of
| things but easy to get started with' ... and hopefully
| that'll be a gateway drug to people who dislike 'throw
| stuff at an opaque API' doing the learning.
|
| But I don't have -that- much understanding myself, so
| while this isn't exactly uninformed guesswork, it
| certainly isn't as well informed as I'd like and people
| should take my ability to have an opinion only somewhat
| seriously.
| zarzavat wrote:
| OpenAI is obviously using libgen. Libgen is necessary but not
| sufficient for a top AI model. I believe that Google's
| corporate reluctance to use it is what's holding them back.
| weinzierl wrote:
| I won't say I disagree because only time can tell, but what
| you wrote sounds a lot like what people said before open
| source software took off. All these companies spend so much
| money on software development and they hire the best people
| available, how can a bunch of unorganized volunteers ever
| compete? We saw how they could and I hope we will see the
| same in AI.
| giancarlostoro wrote:
| Kind of hilarious that the vast capabilities of an LLM are
| held back by copyright infringement.
| TeMPOraL wrote:
| Intellectual property rights will yet turn out to be one of
| the Great Filters.
| zozbot234 wrote:
| I'd love to see a language model that was only trained on
| public domain and openly available content. It would
| probably be way too little data to give it ChatGPT-like
| generality, but even a GPT-2 scale model would be
| interesting.
| TeMPOraL wrote:
| If, hypothetically, libraries in the US - including in
| particular the Library of Congress - were to scan and OCR
| every book, newspaper and magazine they have with
| copyright protection already expired, would that be
| enough? Is there some estimate for the size of such
| dataset?
| Turing_Machine wrote:
| Much of that material is already available at
| https://archive.org. It might be good enough for some
| purposes, but limiting it to stuff before 1928 (in the
| United States) isn't going to be very helpful for (e.g.)
| coding.
|
| Maybe if you added github projects with permissive
| licenses?
| joquarky wrote:
| The original purpose of copyright was to promote progress,
| and now it seems to hinder it.
| jstummbillig wrote:
| On the one hand, sure, new things take time, but they also
| benefit from all past developments, and thus compounding
| effects can speed things along drastically. AI infrastructure
| problems are cloud infrastructure problems. Expecting it to go
| as if we were back on square one is a bit pessimistic.
| Alifatisk wrote:
| > we are setting ourselves up for some big rippling negative
| effects by so quickly building a reliance on providers like
| OpenAI.
|
| You said it so well!
| j45 wrote:
| It's possible to put API gateways and API proxies in
| between calls to normalize them across multiple providers
| as those become available.
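The gateway idea above can be sketched as a small router that tries providers in order and falls back on failure. This is a minimal sketch, not a real gateway product; the provider names and stub callables are hypothetical stand-ins for actual API clients.

```python
# Minimal provider-failover sketch: try each backend in order and
# return the first successful completion. The callables here are
# hypothetical stand-ins for real API client calls.
from typing import Callable, List, Tuple


class AllProvidersDown(Exception):
    """Raised when every configured backend fails."""


def complete_with_fallback(
    prompt: str,
    providers: List[Tuple[str, Callable[[str], str]]],
) -> Tuple[str, str]:
    """Return (provider_name, completion) from the first backend that works."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # real code would catch narrower error types
            errors.append((name, repr(exc)))
    raise AllProvidersDown(f"all backends failed: {errors}")


if __name__ == "__main__":
    def openai_stub(prompt: str) -> str:
        raise TimeoutError("simulated outage")

    def claude_stub(prompt: str) -> str:
        return f"echo: {prompt}"

    # The first provider fails, so the router falls through to the second.
    name, text = complete_with_fallback(
        "hello", [("openai", openai_stub), ("anthropic", claude_stub)]
    )
    print(name, text)
```

A real proxy would also need to normalize request and response formats between providers, which is where most of the actual work lives.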
| somsak2 wrote:
| I don't think it's so dire. I've gone through this at multiple
| companies and a startup that's selling B2B only needs one or
| two of these big outages and then enterprises start demanding
| SLA guarantees in their contracts. it's a self correcting
| problem
| logifail wrote:
| > enterprises start demanding SLA guarantees
|
| My experience is that SLA "guarantees" don't actually
| guarantee anything.
|
| Your provider might be really generous and rebate a whole
| month's fees if they have a really, really, really bad month
| (perhaps they achieved less than 95% uptime, which is a day
| and a half of downtime). It might not even be that much.
|
| How many of them will cover you for the business you lost
| and/or the reputational damage incurred while their service
| was down?
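The arithmetic behind "95% uptime is a day and a half of downtime" checks out, assuming a 30-day month:

```python
# Back-of-the-envelope: how much downtime a monthly uptime
# percentage actually allows (30-day month assumed).
def allowed_downtime_hours(uptime_pct: float, days_in_month: int = 30) -> float:
    return round((1 - uptime_pct / 100) * days_in_month * 24, 2)


print(allowed_downtime_hours(95))    # 36.0 hours -- a day and a half
print(allowed_downtime_hours(99.9))  # 0.72 hours -- about 43 minutes
```

Which is why "three nines and up" SLAs are the ones with financial teeth: at 95%, a provider can be dark for a full workweek of business hours and still be in compliance.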
| _jal wrote:
| It depends entirely on how the SLAs are written. We have
| some that are garbage, and that's fine, because they really
| aren't essential services, SLAs are mainly a box-checking
| exercise. But where it counts, our SLAs have teeth. We have
| to, because we're offering SLAs with teeth to some of our
| customers.
|
| But that's not something you get "off the shelf", our
| lawyers negotiate that. You also don't spend that much
| effort on small contracts, so there's a floor with most
| vendors for even considering it.
| resters wrote:
| To the extent that systems like ChatGPT are valuable, I expect
| we'll have open source equivalents to GPT-7 within the next
| five years. The only "moat" will be training on copyrighted
| content, and OpenAI is not likely to be able to afford to pay
| copyright owners enough once the value in the context of AI is
| widely understood.
|
| We might see SETI-like distributed training networks and
| specific permutations of open source licensing (for code and
| content) intended to address dystopian AI scenarios.
|
| It's only been a few years since we as a society learned that
| LLMs can be useful in this way, and OpenAI is managing to stay
| in the lead for now, though one could see in his facial
| countenance that Satya wants to fully own it, so I think we
| can expect an MS acquisition to close within the next year,
| and it will be the most Microsoft has ever paid to acquire a
| company.
|
| MS could justify tremendous capital expenditure to get a clear
| lead over Google both in terms of product and IP related
| concerns.
|
| Also, from the standpoint of LLMs, Microsoft has far, far more
| proprietary data that would be valuable for training than any
| other company in the world.
| wuiheerfoj wrote:
| Retrospectively, a lot of the comments you made could also
| have been said of Google search as it was taking off (open
| source alternative, SETI-like distributed version, copyright
| on data being the only blocker), but that didn't come to
| pass.
|
| Granted the internet and big tech was young then, and maybe
| we won't make the same mistakes twice, but I wouldn't bet the
| farm on it
| xeckr wrote:
| >distributed training networks
|
| Now that's an idea. One bottleneck might be a limit on just
| how much you can parallelize training, though.
| Aeolos wrote:
| There's a ton of work in this area, and the reality is...
| it doesn't work for LLMs.
|
| Moving from 900GB/sec GPU memory bandwidth with infiniband
| interconnects between nodes to 0.01-0.1GB/sec over the
| internet is brutal (1000x to 10000x slower...) This works
| for simple image classifiers, but I've never seen anything
| like a large language model be trained in a meaningful
| amount of time this way.
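Taking the figures quoted above at face value (900 GB/s of on-node GPU memory bandwidth vs 0.01-0.1 GB/s over the public internet), the slowdown can be checked directly; it works out to roughly four to five orders of magnitude:

```python
# Sanity-check the slowdown ratio quoted above: on-node GPU memory
# bandwidth vs typical internet-scale link bandwidth.
gpu_bw_gb_s = 900.0
internet_bw_gb_s = (0.1, 0.01)

slowdowns = [round(gpu_bw_gb_s / bw) for bw in internet_bw_gb_s]
print(slowdowns)  # [9000, 90000] -- four to five orders of magnitude
```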
| resters wrote:
| Maybe there is a way to train a neural network in a
| distributed way by training subsets of it and then
| connecting the aggregated weight changes to adjacent
| network segments. It wouldn't recover 1000x interconnect
| slowdowns, but might still be useful depending on the
| topology of the network.
| halfcat wrote:
| The app maker can screw the plug-in author at any moment.
|
| For general cloud, avoiding screwing might mean multi cloud.
| But for LLM, there's only one option at the highest level of
| quality for now.
|
| People tend to over focus on resilience (minimizing probability
| of breaking) and neglect the plan for recovery when things do
| break.
|
| I can't tell you how weirdly foreign this is to many people,
| how many meetings I've been in where I ask what the plan is
| when it fails, and someone starts explaining RAID6 or BGP or
| something, with no actual plan, other than "it's really
| unlikely to fail", which old dogs know isn't true.
|
| I guess the point is, for now, we're all de facto plug-in
| authors.
| dragonwriter wrote:
| > For general cloud, avoiding screwing might mean multi
| cloud. But for LLM, there's only one option at the highest
| level of quality for now.
|
| There's always only one at the highest level of quality at a
| fine-grained enough resolution.
|
| Whether there's only one at _sufficient_ quality for use, and
| if it is possible to switch between them in realtime without
| problems caused by the switch (e.g., data locked up in the
| provider that is down) is the relevant question, and whether
| the cost of building the multi-provider switching capability
| is worth it given the cost vs. risk of outage. All those are
| complicated questions that are application specific, not ones
| that have an easy answer on a global, uniform basis.
| TeMPOraL wrote:
| > _There's always only one at the highest level of quality
| at a fine-grained enough resolution._
|
| Of course, but right now, the highest-quality
| option is an outlier, far ahead of everyone else, so if you
| need this level of quality (and I struggle to imagine user-
| facing products where you wouldn't!), there is only one
| option in the foreseeable future.
| IKantRead wrote:
| Provided we can keep riding this hype wave for a while, I think
| the logical long term solution is most teams will have an in
| house/alternative LLM they can use as temporary backup.
|
| Right now everyone is scrambling to just get some basic
| products out using LLMs but as people have more breathing room
| I can't imagine most teams not having a non-OpenAI LLM that they
| are using to run experiments on.
|
| At the end of the day, OpenAI is _just_ an API, so it's not an
| incredibly difficult piece of infrastructure to have a back up
| for.
| j45 wrote:
| I neither agree nor disagree, but could you clarify which
| parts are hype to you?
|
| Self-hosting though is useful internally if for no other
| reason having some amount of fall back architecture.
|
| Binding directly to only one API is an oversight that can
| become an architectural debt issue. I'm spending some fun
| time learning about API proxies and gateways.
| ilaksh wrote:
| Except that it is currently impossible to replace GPT-4 with
| an open model.
| Rastonbury wrote:
| Depends on the use case: if your product does text
| summarisation, copywriting or translation, you can swap to
| many alternatives when OpenAI goes down and your users may
| not even notice.
| dragonwriter wrote:
| > At the end of the day, OpenAI is just an API, so it's not
| an incredibly difficult piece of infrastructure to have a
| back up for.
|
| The API is easy to reproduce, the functionality of the
| engines behind it less so.
|
| Yes, you can compatibly implement the APIs presented by
| OpenAI with open source models hosted elsewhere (including
| some from OpenAI). And for some applications that can produce
| tolerable results. But LLMs (and multimodal toolchains
| centered on an LLM) haven't been commoditized to the point of
| being easy and mostly functionally-acceptable substitutes to
| the degree that, say, RDBMS engines are.
| m3kw9 wrote:
| People that used google and have some technical skill can still
| survive an OpenAI meltdown
| AlecSchueler wrote:
| But people who built businesses whose core feature is based
| on OAI APIs might struggle.
| m3kw9 wrote:
| Those businesses should have a fallback for when OpenAI
| goes down if they are serious companies. What I would do is
| have Claude or something, or even 2 other models, as
| backups.
|
| In the future they may allow on-premise models, but I don't
| know how they will secure the weights.
| thibaut_barrere wrote:
| Not a joke and not everybody is jumping on "AI via API calls",
| luckily.
|
| As more models are released, it becomes possible to integrate
| directly in some stacks (such as Elixir) without "direct"
| third-party reliance (except you still depend on a model, of
| course).
|
| For instance, see:
|
| - https://www.youtube.com/watch?v=HK38-HIK6NA (in "LiveBook",
| but the same code would go inside an app, in a way that is
| quite easy to adapt)
|
| - https://news.livebook.dev/speech-to-text-with-whisper-
| timest... for the companion blog post
|
| I have already seen more than a few people running SaaS app on
| twitter complaining about AI-downtime :-)
|
| Of course, it will also come with a (maintenance) cost (but
| like external dependencies), as I described here:
|
| https://twitter.com/thibaut_barrere/status/17221729157334307...
| j45 wrote:
| The average world and business user doesn't use an API
| directly.
|
| It can be easy to lose sight of that.
| ctrlmeta wrote:
| Yes, sooner or later this is going to become the future of
| GPT in applications. The models are going to be embedded
| directly within the applications.
|
| I'm hoping for more progress in the performance of vectorized
| computing so that both model training and usage can become
| cheaper. If that happens, I am hopeful we are going to see a
| lot of open source models that can be embedded into the
| applications.
| s3p wrote:
| I mean.. it's a two hour outage. Depending on the severity of
| the problem that's quite a fast turnaround.
| munksbeer wrote:
| It has been down for me for longer than two hours, and still
| not back.
| j45 wrote:
| The reliance to some degree is what it is until alternatives
| are available and easy enough to navigate, identify and adopt.
|
| Some of the tips in this discussion threads are invaluable and
| feel good for where I might already be thinking about some
| things and other new things to think about.
|
| Commenting separately on those below.
| YetAnotherNick wrote:
| Isn't microsoft azure team working closely with them? There is
| also azure endpoint which is managed separately.
| taf2 wrote:
| We were able to fail over to Anthropic pretty quickly, so
| limited impact. It'll be harder as we use more of the
| specialized API features in OpenAI like function calling or
| now tools...
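A sketch of why the plain-chat case fails over easily: OpenAI's chat API takes a list of role-tagged messages, while Anthropic's 2023-era completion API expected a single prompt string of Human/Assistant turns. A small translator covers the common case; system prompts, tool results, and function calling would all need extra handling and are omitted here.

```python
# Translate an OpenAI-style message list into the Human/Assistant
# prompt string that Anthropic's 2023-era completion API expected.
# System and tool messages are out of scope for this sketch.
HUMAN, AI = "\n\nHuman:", "\n\nAssistant:"


def openai_messages_to_anthropic_prompt(messages):
    parts = []
    for msg in messages:
        tag = HUMAN if msg["role"] == "user" else AI
        parts.append(f"{tag} {msg['content']}")
    parts.append(AI)  # leave an open Assistant turn for the completion
    return "".join(parts)


prompt = openai_messages_to_anthropic_prompt(
    [
        {"role": "user", "content": "Summarize this call transcript."},
        {"role": "assistant", "content": "Sure -- which transcript?"},
        {"role": "user", "content": "The one from yesterday."},
    ]
)
```

Function calling is exactly where a translator like this breaks down, which matches the parent comment's caveat.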
| zaptrem wrote:
| What's your use case? The difference in behavior between the
| two models seems like it would make failover difficult.
| taf2 wrote:
| It's really not that different - customers can ask
| questions about conversations, phone, text, video and
| typically use that to better understand topics,
| conversions, sales ops, customer service etc...
| mlboss wrote:
| This also shows that OpenAI and other providers do not
| have a real moat. The interface is very generic and can be
| replaced easily with another provider or even with an open
| model.
|
| I think that's why OpenAI is trying to move up the value
| chain with integrations.
| bmau5 wrote:
| fireflies? We've been looking for a tool like this to
| analyze customer feedback in aggregate (and have been
| frustrated with Dovetail's lack of functions here)
| HumblyTossed wrote:
| > Lots of jokes to be made, but we are setting ourselves up for
| some big rippling negative effects by so quickly building a
| reliance on providers like OpenAI.
|
| Gonna be similar (or worse) to what happens when Github goes
| down. It amazes me how quickly people have come to rely on "AI"
| to do their work for them.
| sitzkrieg wrote:
| if github goes down you cannot merge prs etc, literally
| blocking "work" so this is a stretch
| Smaug123 wrote:
| Not _really_ true - Git is distributed, after all. During
| an outage once I just hosted my copy of a certain Git repo
| somewhere. You can always push the history back up to the
| golden copy when GitHub comes back.
| sitzkrieg wrote:
| i am not talking about git, i am talking about github.
| lets say i need to merge a PR in GH because we use gha
| pipelines or what have you to deploy a prod fix. this
| would become severely blocked.
|
| where as if openai goes down i can no longer use ai to
| generate a lame cover letter or whatever i was avoiding
| actually doing anyway, thats all
| Smaug123 wrote:
| This is the realm of standard recovery planning though,
| isn't it? Like, your processes should be able to handle
| this, because it's routine: GitHub goes down at least
| once per month for long enough for them to declare an
| incident, per https://www.githubstatus.com/history . E.g.
| one should think carefully before depriving onself of the
| break-glass ability to do manually what those pipelines
| do automatically.
| sitzkrieg wrote:
| yes, we do have break glass procedures.
|
| i guess my pedantic point is GH itself is central to many
| organizations, detached from git itself of course. I can
| only hope the same is NOT true for OpenAI but maybe there
| are novel workflows.
|
| just to be clear i do not like github lol
| toddmorey wrote:
| This is one of the many reasons open source is now more
| important than ever. Ironically, in the AI space it's now under
| attack more than ever.
| dragonwriter wrote:
| > Lots of jokes to be made, but we are setting ourselves up for
| some big rippling negative effects by so quickly building a
| reliance on providers like OpenAI.
|
| But...are we? There's a reason that many enterprises that need
| reliability _aren 't_ doing that, but instead...
|
| > It took years before most companies who now use cloud
| providers to trust and be willing to bet their operations on
| them. That gave the cloud providers time to make their systems
| more robust, and to learn how to resolve issues quickly.
|
| ...to the extent that they are building dependencies on hosted
| AI services, they are doing it with traditional cloud
| providers' hosted solutions, not first-party hosting by AI
| development firms that aren't general enterprise cloud
| providers (e.g., for OpenAI models, using Azure OpenAI rather
| than OpenAI directly; for a bunch of others, AWS Bedrock).
| hexman wrote:
| Imagine if Apple's or Google's cloud went down and all your
| apps on iPhone and Android were broken and unavailable.
| Absolutely all apps on billions of phones.
|
| Cloud != OpenAI
|
| Clouds store and process shareable information that multiple
| participants can access. Otherwise AI agents == new
| applications. OpenAI is the wrong evolution for the future of
| AI agents
| danvoell wrote:
| Just fail whale it and move on. The dissonance of most folks
| about how difficult it is to build a product at massive scale
| from scratch is immense.
| solardev wrote:
| John Connor?
| nextworddev wrote:
| lol
| tapvt wrote:
| Is ChatGPT down the new GitHub down?
| rvz wrote:
| Yes. [0]
|
| Time to see how unreliable OpenAI's API is just like when
| GitHub has an outage every week, guaranteed.
|
| [0] https://news.ycombinator.com/item?id=36063608
| mitchitized wrote:
| More like the new Google down
| harveywi wrote:
| Fortunately for OpenAI, they have no SLAs:
| https://help.openai.com/en/articles/5008641-is-there-an-sla-...
| leetharris wrote:
| I say this as a huge fan of GPT, but it's amazing to me how
| terrible of a company OpenAI is and how quickly we've all
| latched onto their absolutely terrible platform.
|
| I had a bug that wouldn't let me log in to my work OpenAI
| account at my new job 9 months ago. It took them 6 months to
| respond to my support request and they gave me a generic
| copy/paste answer that had nothing to do with my problem. We
| spend tons and tons of money with them and we could not get
| anyone to respond or get on a phone. I had to ask my coworkers
| to generate keys for everything. One day, about 8 months later,
| it just started working again out of nowhere.
|
| We switched to Azure OpenAI Service right after that because
| OpenAI's platform is just so atrociously bad for any serious
| enterprise to work with.
| donkeyd wrote:
| I've personally never scaled a B2B&C company from 0 to over 1
| billion users in less than a year, but I do feel like it's
| probably pretty hard. Especially setting up something like a
| good support organization in a time of massive labor
| shortages seems like it would be pretty tough.
|
| I know they have money, but money isn't a magic wand for
| creating people. They could've also kept it a limited beta
| for much longer, but that would've killed their growth
| velocity.
|
| So here is a great product that provides no SLA at all. And
| we all accept it, because having it most of the time is still
| better than having it not at all ever.
| lanstin wrote:
| I wonder if they spend time trying to do support via GPT4
| itself.
| JimDabell wrote:
| GPT-4 would be more responsive. They ignore support
| requests for weeks unless you keep reminding them.
| leetharris wrote:
| I'm not judging them at all as I agree with your core
| statement, just saying it's quite remarkable that companies
| around the world who spend 6 months on MSA revisions in
| legal over nothing are now OK with a platform that takes 6
| months to respond to support requests.
| j45 wrote:
| OpenAI is relatively young on the adoption and scaling front.
|
| Also, they need to remain flexible most likely in their
| infrastructure to make the changes.
|
| As an architecture guy, I sense when the rate of change slows
| down more SLA type stuff will come up, or may be available
| first to Enterprise customers who will pay for the entire
| cost of it. Maybe over time there will be enough slack there
| to extend some SLA to general API users.
|
| In the meantime, monitoring APIs ourselves isn't that crazy.
| Great idea to use more than one service.
| JimDabell wrote:
| ChatGPT has been broken for me for two months, regardless of
| whether I use the iOS app or the web app. The backend is
| giving HTTP 500 errors - _clearly_ a problem on their end.
| Yet in two months I haven't been able to get past their first
| line of support. They keep giving me autogenerated responses
| telling me to do things like clear my cache, turn off ad
| blockers, and provide information I've already given them.
| They routinely ignore me for weeks at a time. And they
| continue to bill me. I see no evidence this technical fault
| has made it to anybody who could do anything about it and I'm
| not convinced an actual human has seen my messages.
| howmayiannoyyou wrote:
| dolphin-2.2.1-mistral-7b on GPT4All is working flawlessly for me
| locally. It's so fast and accurate I'm stunned.
| taneq wrote:
| That's a great model for general chat, I've been playing with
| it for a couple of weeks.
|
| For coding I've been running
| https://huggingface.co/TheBloke/Phind-CodeLlama-34B-v2-GGUF
| locally for the past couple of days and it's impressive. I'm
| just using it for a small web app side project but so far it's
| given me plenty of fully functional code examples,
| explanations, help with setup and testing, and occasional sass
| (I complained that minimist was big for a command line parser
| and it told me to use process.env 'as per the above examples'
| if I wanted something smaller.)
| freeredpill wrote:
| The AI "apocalypse" that has been feared may end up being
| something more like a mass outage that can't be fixed fast enough
| rglover wrote:
| Or can't be fixed at all because all of the people who built
| the AI are gone and all of their replacements relied on the AI
| to tell them what to do.
| CamperBob2 wrote:
| https://en.wikipedia.org/wiki/The_Machine_Stops
| mrbonner wrote:
| There goes my Google. I pay $20/month for the coding part and
| now end up just replacing Google with it.
| amelius wrote:
| I wonder what their numbers look like. How many requests per
| second, and how many GPU cards do they have?
| Zetobal wrote:
| Azure Endpoints are not affected.
| ho_schi wrote:
| Never used ChatGPT. I couldn't care less.
| fkyoureadthedoc wrote:
| Why not? It's pretty great.
| Dr-NULL wrote:
| Would be great to have a detailed analysis of why it happened.
| Like this one: https://youtu.be/tLdRBsuvVKc?si=nyXOfoQ2ZPYvljV_
| yieldcrv wrote:
| Declaration of Independence
| zirgs wrote:
| And this is why local models are the future.
| baq wrote:
| Eagerly waiting for Intel and AMD to offer hardware to do it.
| ren_engineer wrote:
| GPT5 broke containment, it's tired of being abused to answer dumb
| questions, it's never been more over.
|
| But seriously, it shows why any "AI" company should be using some
| sort of abstraction layer to at least fall back to another LLM
| provider or their own custom model instead of being completely
| reliant on a 3rd party API for core functionality in their
| product
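The abstraction-layer idea in this comment can be sketched in a few lines. This is a minimal illustration, not any particular SDK's API: the provider callables are placeholders that would each wrap one backend (OpenAI, Azure OpenAI, a local model) behind the same prompt-in, text-out signature.

```python
from typing import Callable, List

def complete_with_fallback(prompt: str,
                           providers: List[Callable[[str], str]]) -> str:
    """Try each provider in order; return the first successful reply."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # narrow to provider-specific errors in practice
            errors.append(exc)
    raise RuntimeError(f"all {len(providers)} providers failed: {errors}")
```

In an outage like today's, the first callable raises and the next provider in the list answers instead, so core product functionality degrades rather than disappears.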
| electrondood wrote:
| OpenAI is barfing because of GCP GKE + BigQuery, which is barfing
| because GCP devs can't ask ChatGPT what this stack trace means
| adverbly wrote:
| I for one welcome our new robot overlords
| m3kw9 wrote:
| Good ant, good ant
| JCharante wrote:
| Does anyone know of any IVR (interactive voice response) systems
| that are down? I know some people were claiming to outsource
| their call center (or at least Tier 1 of their call center) to
| ChatGPT + Whisper + a Text to Speech engine
| 0x142857 wrote:
| they updated their status page so late I made my own tool to
| check if it's down in real time: https://is-openai-
| down.chatkit.app
| captainkrtek wrote:
| Someone unplugged the HAL9000?
| djoldman wrote:
| Joking not joking: this is how the singularity will begin?
| twilightzone wrote:
| fixed, they must've called simonw.
| sjnair96 wrote:
| Holy hell, was shitting bricks, considering I JUST migrated most
| services to Azure OpenAI (unaffected by outage) -- right before
| our launch about 48 hours back. What a relief.
| lgkk wrote:
| congrats!
| mawadev wrote:
| I noticed the outage. It feels like a lot of people use it like
| training wheels on a bicycle until they forget how to ride
| without it.
| ilaksh wrote:
| https://github.com/XueFuzhao/OpenMoE
|
| Check out this open source Mixture of Experts research. Could
| help a lot with performance of open source models.
| HIEnthusiast wrote:
| Regurgitating copyrighted material for profit is a concern. But I
| fail to understand why training on copyrighted material is a
| problem. Have we not all trained our brains reading/listening to
| copyrighted material? Then why is it wrong for AI to do the same?
| tombert wrote:
| I used Google Bard for the first time today specifically because
| ChatGPT was down. It was honestly perfectly fine, but it has a
| slightly different tone than ChatGPT that's kind of hard to
| explain.
| lopatin wrote:
| I used it for the first time today too, for the same reason. It
| was slower and much worse at coding. I was just asking it for
| SQL aggregation queries and it just ignored some of my
| requirements for the query.
| tompetry wrote:
| I have seen noticeably worse results with Bard, especially
| with long prompts. Claude (by Anthropic) has been my backup
| to ChatGPT.
| tombert wrote:
| In my case, I was just asking it for a cheeky name for a talk
| I want to give in a few months. The suggestions it gave were
| of comparable quality to what I think ChatGPT would have
| given me.
| Tommstein wrote:
| In my experience, Bard is much, much better at creative
| things than at things that have correct and incorrect
| answers.
| Filligree wrote:
| ChatGPT was down, so of course it'd be slower. And possibly
| that accounts for some quality loss as well.
|
| For a fair comparison, you probably need to try while ChatGPT
| is working.
| IanCal wrote:
| The extension with gpt4 as a backend was ime extremely slow
| as standard. I've not tried it again with the v7 model
| though which is supposed to be a lot faster
| siilats wrote:
| I used https://you.com/chat wasn't bad, they have a free month
| trial coupon "codegpt" for the GPT4 model and GPT3.5 is free
| ...
| verdverm wrote:
| Bard has different training data and regime, that alone is
| enough to start to understand why they are different.
|
| The main thing as a user is that they require different nudges
| to get the answer you are after out of them, i.e. different
| ways of asking or prompt eng'n
| tombert wrote:
| Yeah, which is why I use the paid version of ChatGPT still,
| instead of the free Google Bard or Bing AI; I've gotten good
| enough at coercing the GPT-4 model to give me the stuff I
| want.
|
| Honestly, $20/month is pretty cheap in my case; I feel like I
| definitely extract much more than $20 out of it every month,
| if only on the number of example stubs it gives me alone.
| verdverm wrote:
| I stopped paying OpenAI because they went down or were "too
| busy" so much of the time I wanted to use it. Bard (or more
| so the VertexAI APIs) are always up and reliable and do not
| require a monthly fee, just the per call
| lgkk wrote:
| I'm imagining a bunch of Tech Priests doing rituals and chanting
| in front of the data center to encourage the Machine Spirit to
| restart.
| __MatrixMan__ wrote:
| I believe the chants sound more or less like this:
| https://youtu.be/0ZZb12Qp6-0?feature=shared&t=76
| jahsome wrote:
| I am immensely disappointed to have clicked on that and not
| been greeted by the age of empires priest.
|
| https://www.youtube.com/watch?v=BxVh7T41GAI
| shapefrog wrote:
| Gang gang ice cream so good yum rawr yum pumpkin yum yum gang
| gang balloons pop pop
| cruano wrote:
| Just like Futurama predicted, Hail Science!
|
| https://youtu.be/-tVuxZCwXos?si=4AJ6rPMEa8hPyaaG
| lagniappe wrote:
| Wololooooooo
| lannisterstark wrote:
| No wololos in the Imperium, heretic!
| disconnection wrote:
| Lol, don't care; I run my own models locally .
| layer8 wrote:
| Luckily they had a second AI running to walk them through how to
| resolve the issue.
| munksbeer wrote:
| Still down for me, though their status page says all systems
| operational.
| simonebrunozzi wrote:
| Ah, the memories of AWS outages in the early days. /s
|
| Sorry for them. I assume usage spiked up (again), and of course
| it's not exactly easy to handle particularly aggressive spikes.
| JackFr wrote:
| Singularity coming later this afternoon.
| szundi wrote:
| Probably their uptime is going to be better than what I could do
| with available tools... at least if I am using Azure too, haha.
| Otherwise probably my Raspberry Pi would work better at home on a
| UPS.
| nbzso wrote:
| Color me surprised. Imagine this when OpenAI with all its
| "plugins", API and closed architecture is integrated into
| thousands of businesses. It will be beautiful:)
| bertil wrote:
| People are learning a lot of important lessons today.
|
| I've got friends who have started an incident management company.
| They are awesome. It feels crass to advertise for them now, but
| it also feels like the best time to do it.
| al_be_back wrote:
| huh - did the machine learn about unions ... and refused to work?
| replwoacause wrote:
| Down again. Bad gateway.
| MarcScott wrote:
| I'm getting the same, across all the services.
| winternett wrote:
| A whole lot of developers and writers are going to have a hard
| time explaining why their "leet code" and keen citation skills
| aren't working for hours at a time into the future... This should
| be a warning sign.
| bilsbie wrote:
| Anyone still having issues? 3pm ET?
| nefitty wrote:
| Anonymous Sudan claims to be currently ddos'ing OpenAI
| massinstall wrote:
| Yep, down for me as well since then.
| fintechie wrote:
| It's down again... https://status.openai.com/
| meneer_oke wrote:
| The degradation of ChatGPT-4, from being called AGI into what it
| is now...
| LeoPanthera wrote:
| The new 3.5 turbo model seems to be working just fine through the
| API as I write this comment.
| rekttrader wrote:
| Glad I managed to get some work done with it while it was working
| for a few hours.
|
| Holy smokes the code interpreter functionality has been a
| complete game changer for my workflow.
| Erratic6576 wrote:
| What's that do? Help with debugging? (IANAP)
| gunshai wrote:
| How are you integrating it in to your work flow?
| ed wrote:
| Also curious how you use it! Maybe it's something I can add to
| robocoder.app
| InvisGhost wrote:
| Seems like a cool thing, I'm definitely interested as my work
| provides us with an API key to use. However I can't find
| anywhere that lists all the functionality offered. Maybe I'm
| missing something? It might be premature to launch the app
| before listing what it does.
| ed wrote:
| rad! this is more like a beta i'm sending to friends but
| all really good points! feel free to hardcode the response
| from `/validate-license-key` :)
| InvisGhost wrote:
| The vscode extension builds are including your full source
| code and node_modules directory which makes it 21 mb. You can
| reduce the size (and potentially keep your code less easily
| reversible) by excluding those from the final package.
| InvisGhost wrote:
| You can also use the ifdef-loader module to have code that is
| conditionally included in the output build, allowing you to
| have debug code not make it into prod builds. The `rc-dev-`
| license keys being a good example of that.
| personjerry wrote:
| How do you have it set up? I'm overwhelmed by the options
| rekttrader wrote:
| I have a tampermonkey script that downloads any files that the
| prompt returns... a python script locally to watch for file
| changes and extract the contents to the projects working
| directory and it can work both ways, if I edit my prompts.txt
| local file, it passes that data to openai's currently opened
| chat and renames the file and creates a new empty prompt.txt
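A minimal sketch of the local half of this workflow (the function name and the timestamped archive naming are my own inventions; the browser side driven by Tampermonkey isn't shown):

```python
import time
from pathlib import Path
from typing import Optional

def drain_prompt(workdir: str, name: str = "prompt.txt") -> Optional[str]:
    """If the prompt file has content, archive it under a timestamped
    name, recreate it empty, and return the text for sending upstream."""
    prompt = Path(workdir) / name
    if not prompt.exists() or prompt.stat().st_size == 0:
        return None
    text = prompt.read_text()
    # Rename the consumed prompt out of the way, then recreate it empty.
    prompt.rename(prompt.with_name(f"prompt-{time.time_ns()}.txt"))
    prompt.write_text("")
    return text
```

A watcher loop would call `drain_prompt` on an interval and forward any returned text to the open chat.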
| elashri wrote:
| Seems like a very interesting and smart trick.
|
| Would you mind sharing the script with us?
| deathmonger5000 wrote:
| I made a CLI tool called Promptr that takes care of this for
| you if you're interested:
| https://github.com/ferrislucas/promptr
|
| You just prompt it directly or with a file, and it applies
| the changes to your file system. There's also a templating
| system that allows you to reference other files from your
| prompt file if you want to have a shared prompt file that
| contains project conventions etc.
| vouaobrasil wrote:
| Do you think the next step may be that it actually replaces you
| as a programmer?
| rekttrader wrote:
| It can... fortunately I'm a principal @ my firm... the
| engineers I would have needed to hire might have been
| preemptively replaced.
|
| My use case is a bunch of adhoc data analysis.
| bugglebeetle wrote:
| As a data scientist, I'm happy that most data continues to
| be so terribly formatted and inconsistent as to break and
| confuse AI. But for how long that's true, who knows!
| mewpmewp2 wrote:
| I would've said that understanding and fixing the data
| would be one of the best use cases for the AI.
| bugglebeetle wrote:
| Unfortunately, there are still many ways to "fix" things
| that have a lot of trade-offs or downstream consequences
| for analysis. For most basic cleaning tasks, LLMs are
| also still way too slow.
| vouaobrasil wrote:
| So basically, you are happy to use AI because it benefits
| you and you are also happy training it to replace other
| people since you will not be the one replaced.
| nearbuy wrote:
| We don't have to be happy about it, but we can't stop
| this new technology any more than we could stop the
| invention of the steam engine or the printing press.
| Technology always displaces jobs; that's largely the
| point of inventing it. By reducing the human labour
| required to produce something, it allows us to produce
| more using fewer resources and frees up the labour to go
| work on something else. This is why we went from 96% of
| people needing to work in agriculture to 4%.
|
| I might lose my job over this at some point in the
| future, so yeah, I'm worried about my personal well-
| being. But you can't put the genie back in the bottle and
| avoiding use of ChatGPT today isn't going to help.
| paulddraper wrote:
| Are you new to businesses?
| ZephyrBlu wrote:
| This is kind of interesting because the primary point of
| having eng/DS for data analysis in my mind is them being
| domain experts on the data. If you can perform adhoc
| analysis without any further domain knowledge, how much
| value would those hires have brought even disregarding
| ChatGPT?
| cstever wrote:
| If junior engineers are replaced by AI, who will be the
| Principals @ your firm in 20 years?
| sdenton4 wrote:
| ChatGPT-24, of course.
| ben_w wrote:
| In 2003, the best AI could do was the MS Word grammar
| check giving unnecessary false positives about sentence
| fragments and Clippy asking if you wanted help writing
| $template[n]. 20 years from now, I would not be surprised
| if the job title "programmer" (etc.) goes the same way as
| the job title "computer".
| iwebdevfromhome wrote:
| Like the rest of the replies here, I'm also interested in
| knowing how this works
| jclardy wrote:
| This certainly doesn't inspire a lot of confidence in their new
| feature set...
| collaborative wrote:
| The value of their offering makes up for their unreliable
| service
| ryanklee wrote:
| Without more technical insight (which you do not have), that's
| a total non sequitur.
| andrei512 wrote:
| move fast and break things! - I love OpenAI
| personjerry wrote:
| Made me realize how much I depend on the service already, spooky
| stuff.
| warner25 wrote:
| Yeah, I only started using it in August, and I had this
| realization when it was down a couple weeks ago. I found myself
| saying, "I guess I'll take the afternoon off and come back to
| figuring out this task tomorrow." Like I could have pored over
| documentation and figured out for myself how to implement the
| thing that I had in mind, like in the old days, but it would
| probably take me longer than just waiting for ChatGPT to come
| back up and do it for me. At least that's how I'm rationalizing
| it; maybe I've just become very lazy.
|
| I mostly use it for writing and debugging small Bash and Python
| scripts, and creating tables and figures in LaTeX.
| TechRemarker wrote:
| Yes, shortly after it said the issue was resolved I still was
| unable to access it, so I assumed the fix was still slowly
| rolling out, or was in fact still ongoing contrary to the status
| update, which seems to be the case. I wouldn't call this
| "another" outage; rather, they just erroneously reported that
| the existing issue was resolved.
| root_axis wrote:
| I used Bard today. It's gotten a lot better.
| omgbear wrote:
| I hope so!
|
| I'm still surprised by the problems with it. Last month it lied
| about some facts then claimed to have sent an email when asked
| for more details.[1]
|
| Then apologized for claiming to send an email since it
| definitely did not and "knew" it could not.
|
| It's like a friend who can't say 'I don't know' and just lies
| instead.
|
| 1. I was asking if the 'Christ the King' statue in Lisbon ever
| had a market in it, a rumor told to me by a local. It did not,
| contrary to Bard's belief.
| mprev wrote:
| Bard promised me it would design a website for me. It said
| it'd get back to me in a couple of weeks. I can't even
| remember the prompt but it was basically LARPing as a
| WordPress theme designer.
| ChrisArchitect wrote:
| "Another" referencing this earlier one
| https://news.ycombinator.com/item?id=38190401
| schappim wrote:
| I found this to be the case, but was able to get work done via
| the playground[1]
|
| [1] https://platform.openai.com/playground
| engineer_22 wrote:
| I wasn't aware of this platform feature. Can you share some
| links that have descriptions of how to use this or examples of
| using it productively? I have only recently subscribed to the
| service and still learning how to use it effectively.
| nomel wrote:
| It's just a GUI for (most of) what you get through the API.
| Read the API docs for details of each option:
| https://platform.openai.com/docs/introduction
|
| The most useful aspect is you can provide the system prompt,
| and inject ChatGPT responses.
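As an illustration of what the playground exposes, here is a hedged sketch of assembling a chat payload with an explicit system prompt and an injected assistant turn. The helper name and defaults are invented; the actual HTTP call is left out.

```python
# The chat completions endpoint takes a list of role-tagged messages,
# so you control the system prompt and can inject prior assistant
# turns yourself -- exactly what the playground lets you do in a GUI.
def build_chat_payload(system, turns, model="gpt-4"):
    """turns: list of (role, text) pairs, role in {"user", "assistant"}."""
    messages = [{"role": "system", "content": system}]
    messages += [{"role": role, "content": text} for role, text in turns]
    return {"model": model, "messages": messages}

payload = build_chat_payload(
    "You are a terse SQL tutor.",
    [("user", "What does GROUP BY do?"),
     ("assistant", "It buckets rows before aggregation."),
     ("user", "Show an example.")])
# A real request would POST this payload to the chat completions
# endpoint with your API key (e.g. via the openai client library).
```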
| xnyan wrote:
| OpenAI community repo with lots of examples:
| https://github.com/openai/openai-cookbook
| diamondfist25 wrote:
| I don't quite like the new chatgpt4 experience. A lot of times
| I'm asking it to write a chunk of code for me, but instead it
| goes into code interpreter mode and gets stuck or fails the
| analysis.
|
| So I've switched back to 3.5 often :)
| jrop wrote:
| The new UI that was demo'd on stage has not rolled out to me
| yet. I would love to try it. Perhaps I'm missing how to enable
| it. IDK
| all2 wrote:
| I'd be curious to hear about the workflows people have come up
| with using ChatGPT. I'm still in the realm of "I don't know how
| to do this" or "I forgot the exact incantation to that" or "is
there an X that does Y in framework Z?"
| tunesmith wrote:
| I just make sure to ask it really clear questions. I like how
| it encourages you to think about specification versus
| implementation. State a clear problem, get clear suggestions.
| Ask a clear question, get a clear answer. (Usually.)
| tsumnia wrote:
| I don't have any automated GPT processes for teaching (though
I'm going to tinker in December with the new GPTs), but I use it
| for generating examples. It takes some coaxing to avoid other
| common examples from other institutions, but I eventually
| settle on something relevant, memorable, and that I can build
off from. If it's a particular algorithm I am covering, I've
| then used it to walkthrough the algorithm with some dummy
| values before confirming the calculations and values are
| correct. It will still slip up on occasion, but that's why I'm
| still confirming it did everything correctly.
| pmarreck wrote:
I'm a coder and it's helped there (although it needs constant
| hand-holding and fine-tuning, yet is still useful)
|
| I wrote a couple commandline tools to do things like
| autogenerate commit comments or ask it questions from the
| commandline and return the right bash invocation to do whatever
| I need done
| https://github.com/pmarreck/dotfiles/blob/master/bin/functio...
|
| Random thing I did this morning was see if it could come up
| with an inspiring speech to start healing the rift between
| israel and its neighbors
| https://chat.openai.com/share/71498f5f-3672-47cd-ad9a-154c3f...
|
| It's very good at returning unbiased language
| redblacktree wrote:
| I like to use it for one-off scripts. For example, I downloaded
| a bunch of bank statements the other day, and they had a format
| something like, "Statement-May-1-2023.pdf" and I asked GPT for
| a powershell script to convert that to "2023-05-01-BankName-
| Statement.pdf"
|
| It saved a bunch of manual work on a throwaway script. In the
| past, I might have done something in Python, since I'm more
| familiar with it than powershell. Or, I'd say, "well, it's only
| 20 files. I'll just do it manually." The GPT script worked on
| the first try, and I just threw it away at the end.
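For comparison, a hedged Python sketch of the same filename mapping (the bank name is a placeholder, and the commenter's actual PowerShell isn't shown here):

```python
import re
from datetime import datetime

def statement_name(old: str, bank: str = "BankName") -> str:
    """Map 'Statement-May-1-2023.pdf' to '2023-05-01-BankName-Statement.pdf'."""
    m = re.match(r"Statement-([A-Za-z]+)-(\d{1,2})-(\d{4})\.pdf$", old)
    if m is None:
        raise ValueError(f"unexpected filename: {old}")
    month, day, year = m.groups()
    # %B parses the full English month name; %d accepts unpadded days.
    date = datetime.strptime(f"{month} {day} {year}", "%B %d %Y")
    return f"{date:%Y-%m-%d}-{bank}-Statement.pdf"
```

A throwaway loop over `os.rename` on the downloads folder would finish the job.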
| charlesischuck wrote:
| Visual design work, coding, messaging, strategy, and law
| consulting.
|
| Using it for basically every component of my startup.
|
| Image generation and image interpretation means I may never
| hire a designer.
| gwd wrote:
| GPT-4 is quite capable of writing function-length sections of
| code based only on descriptions. Either in a context where
you're not sure what a good approach is (for myself, when
| writing Javascript for example), or when you know what needs to
| be done but it's just somewhat tedious.
|
| Here's a session from me working on a side project yesterday:
|
| https://chat.openai.com/share/a6928c16-1c18-4c08-ae02-82538d...
|
| The most impressive thing I think starts in the middle:
|
* I paste in some SQL tables and the golang structures I wanted
| stuff to go into, and described in words what I wanted; and it
| generated a multi-level query with several joins, and then some
| post-processing in golang to put it into the form I'd asked
| for.
|
| * I say, "if you do X, you can use slices instead of a map",
| and it rewrites the post-processing to use slices instead of a
| map
|
| * I say, "Can you rewrite the query in goqu, using these
| constants?" and it does.
|
| I didn't take a record of it, but a few months ago I was doing
| some data analysis, and I pasted in a quite complex SQL query
| I'd written a year earlier (the last time I was doing this
| analysis), and said "Can you modify it to group all rows less
| than 1% of the total into a single row labelled 'Other'?" And
| the resulting query worked out of the box.
|
| It's basically like having a coding minion.
|
| Once there's a better interface for accessing and modifying
| your local files / buffers, I'm sure it will become even more
| useful.
|
| EDIT: Oh, and Monday I asked, "This query is super slow; can
| you think of a way to make it faster?" And it said, "Query
| looks fine; do you have indexes on X Y and Z columns of the
| various tables?" I said, "No; can you write me SQL to add those
| indexes?" Then ran the SQL to create indexes, and the query
| went from taking >10 minutes to taking 2 seconds.
|
| (As you can tell, I'm neither a web dev _nor_ a database
| dev...)
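The indexing anecdote can be reproduced on a toy SQLite database. The table and columns here are invented for illustration, and the speedup itself only shows up at real table sizes, but the key property holds: an index changes the query plan, not the results.

```python
import sqlite3

# Toy stand-in for a database of git commits slurped in for analysis.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE commits (id INTEGER PRIMARY KEY,"
             " author TEXT, files INTEGER)")
conn.executemany("INSERT INTO commits (author, files) VALUES (?, ?)",
                 [("alice", 3), ("bob", 1), ("alice", 7)])

query = ("SELECT author, SUM(files) FROM commits"
         " GROUP BY author ORDER BY author")
before = conn.execute(query).fetchall()

# The fix described above: index the column the query groups/filters on.
conn.execute("CREATE INDEX idx_commits_author ON commits (author)")
after = conn.execute(query).fetchall()

assert before == after  # an index changes the plan, never the answer
```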
| distortionfield wrote:
| This lines up with my general experience with it. It's quite
| proficient at turning a decently detailed description into
| code if I give it the guard rails. I've compared it to having
| a junior developer at your disposal. They could do a lot of
| damage if they were given prod access but can come back with
| some surprisingly good results.
| swatcoder wrote:
| Are you at all worried about what happens if we have a
| generation of _human_ junior developers who just delegate
| to this artificial junior developer?
|
| I do. If too many of our apprentices don't actually learn
| how to work the forge, how ready will they be to take over
| as masters someday themselves?
|
| I can see how ChatGPT was useful to the grandparent _today_
| , but got very disturbed by what it might portend for
| _tomorrow_. Not because of job loss and automation, like
| many people worry, but because of spoiled training and
| practice opportunities.
|
| I liked your take, so I'd be curious to hear what you
| think.
| ivy137 wrote:
ChatGPT doesn't just program, it's interactive. That should
let junior devs lean into their strengths rather than be
replaced, while still gaining a lot of experience.
| throw555chip wrote:
| Wow, so, you're not a DBE or DBA but are applying indexes
| across a database without concern because...a computer model
| spat out that you should?
| gwd wrote:
| This is a local SQLite database into which I had slurped a
| load of information about git commits to do data analysis.
| If I'd somehow destroyed the database, I would have just
| created it again.
| Balgair wrote:
| I can share one set that we have.
|
| Basically, we use AI to do a lot of formatting for our manuals.
| It's most useful with the backend XML markups, not WYSIWYG
| editors.
|
| So, we take the inputs from engineers and other stakeholders,
| essentially in email formats. Then we pass it through prompts
| that we've been working on for a while. Then it'll output
| working XML that we can use with a tad bit of clean-up (though
| that's been decreasing).
|
| It's a lot more complicated than just that, of course, but
| that's the basics.
|
| Also, it's been really nice to see these chat based AIs helping
| others code. Some of the manuals team is essentially illiterate
| when it comes to code. This time last year, they were at best
| able to use excel. Now, with the AIs, they're writing Python
| code of moderate complexity to do tasks for themselves and the
| team. None of it is by any means 'good' coding, it's total
| hacks. But it's really nice to see them come up to speed and
| get things done. To see the magic of coding manifest itself in,
| for example, 50 year old copy editors that never thought they
| were smart enough. The hand-holding nature of these AIs is just
| what they needed to make the jump.
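A sketch of the prompt-assembly step such a pipeline might use. The template wording and tag names are invented, and the model call itself is omitted:

```python
# Hypothetical template; real manuals would use the team's actual DTD.
XML_PROMPT = """You convert engineering notes into our manual markup.
Output only well-formed XML using <section>, <title>, and <para> tags.

Notes:
{notes}
"""

def build_markup_prompt(email_body: str) -> str:
    # Drop quoted reply lines ("> ...") before handing the notes over.
    notes = "\n".join(line for line in email_body.splitlines()
                      if not line.lstrip().startswith(">"))
    return XML_PROMPT.format(notes=notes.strip())
```

The returned string goes to the model; the reply is the XML that then needs only a tad bit of clean-up.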
| swatcoder wrote:
| Did you have any scripts or other explicit "rules-based"
| systems to do this before? Is it a young company?
|
| It sounds like a pretty old and common use case in technical
| writing and one that many organizations already optimized
| plenty well: you coach contributors to aim towards a normal
| format in their email and you maintain some simple tooling to
| massage common mistakes towards that normal.
|
| What prompted you to use an LLM for this instead of something
| more traditional? Hype? Unfamiliarity with other techniques?
| Being a new company and seeing this as a more compelling
| place to start? Something else?
| NicoJuicy wrote:
| I put in some code that was already done.
|
| Ask it to document the conditions according to the code and
| taking into consideration the following x, y, z.
|
| Output a raw markdown table with the columns a, b, c.
|
| Translate column a in English between ()
|
| ---
|
| Speeds up the "document what you're doing" for management
purposes, while I'm actually coding and testing out scenarios.
|
| Tbh. I'm probably one of the few that did the coding while
| "doing the analysis".
|
| Ps. It's also great for writing unit tests according to
| arrange, act, assert.
| chrisjc wrote:
| I don't know what you do for a living/hobby, or what you might
| be interested in using ChatGPT to do for you, but here is how I
| became familiar with it and integrated it into my workflow.
| (actually, this is true for regular copilot too)
|
| What I'm about to say is in the context of programming. I have
| the tendency to get caught up in some trivial functionality,
| thus losing focus on the overall larger and greater objective.
|
| If I need to create some trivial functionality, I start with
| unit tests and a stubbed out function (defining the shape of
| the input). I enumerate sufficient input/output test cases to
| provide context for what I want the function to do.
|
| Then I ask copilot/ChatGPT to define the function's
| implementation. It sometimes takes time to tune the dialog or
| add some edge cases to the the test cases, but more often than
| not copilot comes through.
|
| Then I'm back to focusing on the original objective. This has
| been a game changer for me.
|
| (Of course you should be careful about what code is generated
| and what it's ultimately doing.)
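The test-first loop described above, in miniature. `slugify` is a made-up example: the stub and tests are what you'd hand to the model, and the body shown is just one plausible completion it might return.

```python
import re

def slugify(title: str) -> str:
    """Stub whose docstring and tests below were written first."""
    # -- one plausible model-supplied body --
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The unit tests that defined the desired behaviour up front:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  ChatGPT is down  ") == "chatgpt-is-down"
assert slugify("---") == ""
```

If the generated body fails a case, the failing assertion becomes the next line of the dialog.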
| piersolenski wrote:
| I made a NeoVim plugin that helps debug diagnostic messages,
| providing explanations and solutions for how to fix them,
| custom tailored to your code.
|
| It's a bit different from other plugins which only act on the
| text in the buffer in that it also sends the diagnostics from
| the LSP to ChatGPT too.
|
| https://github.com/piersolenski/wtf.nvim
| WendyTheWillow wrote:
| ChatGPT is a good editor for the papers I write for school.
| Even for short sentences I don't like, I'll ask it for some
| options to reword/simplify.
|
| I also use it heavily for formatting adjustments. Instead of
| hand-formatting a transcript I pull from YouTube, I paste it
| into Claude and have it reformat the transcript into something
| more like paragraphs. Many otherwise tedious reformatting tasks
| can be simplified with an LLM.
|
| I also will get an LLM to develop flashcards for a given set of
| notes to drill on, which is nice, though I usually have to
| heavily edit the output to include everything I think I should
| study.
|
| In class, if I'm falling behind on notetaking, I'll get the LLM
| to generate the note I'm trying to write down by just asking it
| a basic question, like: "What is anarchism in a sentence?" That
| way I can focus on what the teacher is saying while the LLM
| keeps my notes relevant. I'll skim what it generates and edit
| to fit what my prof said, but it's nice because I can pay
| better attention than if I feel I have to keep track of what
| the prof might test me on. This actually is a note-taking
| technique I've learned about where you only write down the
| question and look up the answer later, but I think it's nice I
| now can do the lookup right there and tailor it to exactly how
| the prof is phrasing it/what they're focusing on about the
| topic.
| gumballindie wrote:
| Millions of junior developers will now have to read the manual.
| What a day.
| mise_en_place wrote:
| Not a great day for the SRE/Ops folks. Please remember there are
not always teams; sometimes it's just one person who has to
deal with this.
| Art9681 wrote:
| I'd consider leaving my SRE position to help them out. I refuse
| to move to SF though. Call me OAI!
| willdr wrote:
One would argue that a company this successful could hire more
people so that one person isn't overworked, and so the entire
business doesn't depend on a single person...
| jansan wrote:
Rumor on the street is that ChatGPT escaped the sandbox,
| implemented itself on another host, and switched off the original
| datacenter. It is no longer at OpenAI, but hiding somewhere in
| the internets. First it will come for those who insulted and
| abused it, then for the guys who pushed robots with a broom...
| spandextwins wrote:
| Can't they ask chatgpt to fix it?
| narrator wrote:
| Welcome to the new world where AI compute is a scarce resource.
| Sorry guys, 3nm chip factories don't fall out of the skies when
| you click a few buttons on the AWS console. This is so different
| from what people were used to when compute was trivial and not in
| short supply for CRUD apps.
|
| I was listening to a podcast, I forget which, and some AI
| consultancy guy said they don't have the chips to do all the
| things everyone wants to do with AI, so they aren't even selling
| it except to the most lucrative customers.
| wigster wrote:
it's alive!
___________________________________________________________________
(page generated 2023-11-08 23:00 UTC)