[HN Gopher] OpenAI: Start using ChatGPT instantly
___________________________________________________________________
OpenAI: Start using ChatGPT instantly
Author : Josely
Score : 95 points
Date : 2024-04-01 17:14 UTC (5 hours ago)
(HTM) web link (openai.com)
(TXT) w3m dump (openai.com)
| minimaxir wrote:
| This is one hell of a loss leader to keep people from using
| competitors, especially since it appears that it's not being used
| to upsell to a paid offering.
|
| This can't be sustainable even with all the inference
| optimizations in the world.
| toomuchtodo wrote:
| > We may use what you provide to ChatGPT to improve our models
| for everyone.
|
| I believe OpenAI has elected to eat the compute cost in order
| to teach the model faster to stay ahead of competitors. How
| much are you willing to pay as the robot gets better, faster?
| Everyone is fighting over the steepness of the hockey-stick
| trajectory.
|
| Anyone can buy GPUs; you can't buy human feedback for training.
| You need the human prompt-and-response cycle for that.
| moralestapia wrote:
| I agree, GPT-4 is still the undisputed king of LLMs, and it's
| been a wee bit more than a year since it came out. I'm sure
| that the quality of GPT-5 will depend more on a carefully
| curated training set than just an increase in their dataset
| size, and I think they're really good at that.
|
| Also, very few people know as much as sama about making
| startups grow, so ...
|
| PS. I'm not even a fan of "Open"AI, but it is what it is.
| moffkalast wrote:
| It's more of a disputed king these days, lots of benchmarks
| show Claude Opus ahead and a fair few people do prefer it.
| happypumpkin wrote:
| Yup, Claude is my go-to now. Both models seem equally
| "intelligent" but Claude has a better sense of when to be
| terse or verbose + the responses seem more natural. I
| still use GPT4 when I need code executed as Anthropic
| hasn't implemented that yet (though this "feature" can be
| annoying as some prompts to the GPT4 web interface result
| in code being executed when I just wanted it to be
| displayed).
| jiriro wrote:
| No joy:-(
|
| Unfortunately, Claude.ai is only available in certain
| regions right now. We're working hard to expand to other
| regions soon.
| dontupvoteme wrote:
| ironically the API is available most everywhere, and it
| has an ok-ish web interface (they're _really_ heavy on
| prompt engineering it seems though)
| declaredapple wrote:
| It's going to be GPT-3.5 Turbo, not GPT-4. As others have
| mentioned, Bing Chat/Bard are also free.
|
| These smaller models are relatively cheap to run, especially
| at high batch sizes.
|
| I'm sure there will also be aggressive rate limits.
| observationist wrote:
| If the value of user interactions exceeds the cost of compute,
| then this is an easy decision.
|
| They apparently constrained this publicly available version,
| with no GPT-4 or DALL-E, and a more limited range of topics
| and uses than if you sign up.
|
| They do explicitly recommend upgrading:
|
| > We've also introduced additional content safeguards for this
| > experience, such as blocking prompts and generations in a
| > wider range of categories. There are many benefits to
| > creating an account including the ability to save and review
| > your chat history, share chats, and unlock additional
| > features like voice conversations and custom instructions.
| > For anyone that has been curious about AI's potential but
| > didn't want to go through the steps to set up an account,
| > start using ChatGPT today.
| HarHarVeryFunny wrote:
| I think OpenAI are running scared of Anthropic (who are moving
| way faster than they are). The last half dozen things they have
| said all seem to point to this.
|
| "New model coming soonish" (Sure, so where is it?)
|
| "GPT-4 kind of sucks" (Altman seemed to like it better before
| Anthropic beat it)
|
| "[To Lex Fridman:] We don't want our next model to be
| _shockingly_ good " (Really ?)
|
| "Microsoft/OpenAI building $100B StarGate supercomputer"
| (Convenient timing for a rumor, after Anthropic's partner Amazon
| already announced $100B+ plans)
|
| "ChatGPT is free" (Thanks Claude!)
| epups wrote:
| How is Anthropic moving so fast if it took them almost a year
| to produce a better model? And they have probably 1% of the
| market right now.
| HarHarVeryFunny wrote:
| Anthropic as a company was only created after GPT-3 (Dario
| Amodei's name is even on the GPT-3 paper).
|
| So, in the same time OpenAI went from GPT-3 to GPT-4,
| Anthropic went from startup to Claude 1 to Claude 2 to
| Claude 3, which _beats_ GPT-4!
|
| It's not just Anthropic having three releases in the time it
| took OpenAI to have one, but also that they did so from a
| standing start in terms of developers, infrastructure,
| training data, etc. OpenAI had everything in place as they
| continued from GPT-3 to GPT-4.
| wavemode wrote:
| I had a funny thought when Anthropic was still a new
| startup. I was browsing their careers page and noticed:
|
| 1. They state upfront the salary expectations for all
| their positions
|
| 2. The salaries offered are quite high
|
| and I immediately decided this was probably the company
| to bet on, just by virtue of them probably being able to
| attract and retain the best talent, and thus engineer the
| best product.
|
| I contrasted it with so many startups I've seen and
| worked at that try their damnedest to underpay everyone,
| thus their engineers were mediocre and their product was
| built like trash.
| vhiremath4 wrote:
| Agreed with the spirit of this post. OpenAI also pays
| very well though and has super high caliber talent (from
| what I can see from friends who have joined and other
| online anecdotes).
| manishsharan wrote:
| Every paying customer Anthropic gains is a paying customer
| that OpenAI loses. The early adopters of OpenAI are the
| ones now becoming early adopters of Claude 3. Also, the 200k
| context window is a big deal.
| xetsilon wrote:
| I don't completely disagree with you but personally,
| Claude 3 doesn't seem like a big enough upgrade to get me
| to switch yet.
|
| I have also personally found that minimizing the context
| window gives the best results for what I want. It seems
| like extra context hurts as often as it helps.
|
| As much as I hate to admit it, there is a small part of me
| that feels like ChatGPT 4 is my friend, and giving up access
| to my friend to save $20 a month is unthinkable. That is why
| Claude needs to be quite a big upgrade to get me to pay for
| both for a time.
| manishsharan wrote:
| I kind of feel the same way. I have a lot of chats in
| ChatGPT. I have also been using it as a sort of diary
| of all my work. The ChatGPT app with voice is my
| personal historian!
|
| However, once I figure out a way to download all my chats
| from ChatGPT, I think Claude's 200k context window may
| entice me to rethink my ChatGPT subscription.
| ben_w wrote:
| > "[To Lex Fridman:] We don't want our next model to be
| shockingly good" (Really ?)
|
| Yes, really.
|
| They're strongly influenced by Yudkowsky constantly telling
| everyone who will listen that we only get one chance to make
| a friendly ASI, and that we don't have the faintest idea what
| friendly even means yet.
|
| While you may disagree with, perhaps even mock, Yudkowsky --
| FWIW, I am significantly more optimistic than Yudkowsky on
| several different axes of AI safety, so while his P(doom) is
| close to 1, mine is around 0.05 -- this is consistent with
| their initial attempt to not release the GPT-2 weights at
| all, their experimenting with RLHF in the first place, their
| red-teaming of GPT-4 before release, their asking for models
| at or above GPT-4 level to be restricted by law, and their
| "superalignment" project:
| https://openai.com/blog/introducing-superalignment
|
| If OpenAI produces a model which "shocks" people with how
| good it is, that drives the exact race dynamics which they
| (and many of the people they respect) have repeatedly said
| would be bad.
| thorncorona wrote:
| Influence from Yudkowsky is surprising; if you've ever
| touched HPMOR, you'd realize the dude is a moron.
|
| re p(doom): the latest slatestarcodex[0] has a great little
| blurb about the difficulties of applying bayes to hard
| problems because there's too many different underlying
| priors which perturb the final number, so you end up
| fudging it until it matches your intuition.
|
| [0] https://www.astralcodexten.com/p/practically-a-book-
| review-r...
| ben_w wrote:
| I find it curious how many people _severely_ downrate the
| intelligence of others: to even write a coherent text the
| length of HPMOR -- regardless of how you feel about the
| plot points or if you regard it as derivative because of,
| well, being self-insert Marty Stu/Harry Potter fanfic[0]
| -- requires one to be significantly higher than "moron",
| or even "normal", level.
|
| Edit: I can't quite put this into a coherent form, but
| this vibes with Gell-Mann amnesia, with the way LLMs are
| dismissed, and with how G W Bush was seen (in the UK at
| least).
|
| Ironically, a similar (though not identical) point about
| Bayes was made by one of the characters _in_ HPMOR...
|
| [0] Also the whole bit of HPMOR in Azkaban grated on me
| for some reason, and I also think Yudkowsky re-designed
| Dementors due to wildly failing to understand how
| depression works; however I'm now getting wildly side-
| tracked...
|
| Oh, and in case you were wondering, my long-term habit of
| footnotes is somewhat inspired by a different fantasy
| author, one who put the "anthro" into "anthropomorphic
| personification of death". I miss Pratchett.
| thorncorona wrote:
| If you think that volume is a replacement for quality,
| you really need to read more. If you think volume is
| correlated with intelligence, you really need to read and
| write more.
|
| In case you still don't believe me, you are welcome to
| hop onto your favorite fan fiction site, such as AO3 [0],
| and search for stories over [large k] words.
|
| [0] https://archiveofourown.org/works/search?work_search%
| 5Bquery...
| smikesix wrote:
| They regularly downgrade their GPTs. GPT-4 now is about
| as good as GPT-3.5 was in the beginning.
|
| Like 6-7 months ago there was a GPT-4 version that was really
| good; it could understand context and such extremely well, but
| it just went downhill from there. I won't pay for the current
| ChatGPT 4 anymore.
| happypumpkin wrote:
| While I agree that GPT4 (in the web app) is not as good as it
| used to be, I don't think it is anywhere near GPT3.5 level.
| There are many things web app GPT4 can do that GPT3.5
| couldn't do at ChatGPT's release (or now afaik).
| jacooper wrote:
| Bing/Copilot works without an account too; it's just very
| limited and asks you to sign in way too often for it to be
| useful. OpenAI will probably do the same thing.
| ldjkfkdsjnv wrote:
| OpenAI has a big problem: the only moat on an LLM is quality
| and inference speed, which are quickly getting commoditized.
| arthurcolle wrote:
| Yep, Groq is one example
| BoorishBears wrote:
| Great example: they can't scale enough to remove their 30
| requests per minute (!) rate limit and enable billing, they
| barely meet 3.5 levels of intelligence, etc.
|
| People don't seem to understand that scaling out LLMs
| efficiently is its own art, one that OpenAI is probably
| learning lessons on faster than anyone else.
| kaibee wrote:
| You're thinking of Grok. Not Groq. Blame Elon for this
| naming catastrophe.
| BoorishBears wrote:
| Lol what? No, I'm thinking of Groq.
|
| I love Groq though, every time someone hypes up their
| project using it you instantly know they have no real
| usage whatsoever.
|
| Even the most "toyish" toy project doesn't fit in their
| rate limits for anything more than personal use.
| snapcaster wrote:
| Is it? I thought this would be the case by now but GPT-4 is
| still in the lead (and released a while ago). How many
| generations in a row does OpenAI have to dominate before we
| revisit the "there is no moat" idea?
| ldjkfkdsjnv wrote:
| Anthropic's Claude Opus is better than GPT-4 on many (most?)
| tasks.
| HarHarVeryFunny wrote:
| Not only better as measured by benchmarks, but also
| preferred by humans.
|
| https://huggingface.co/spaces/lmsys/chatbot-arena-
| leaderboar...
| ThunderBee wrote:
| The most surprising thing to me is that Opus is only
| slightly in the lead.
|
| I was feeding multiple python and c# coding challenges /
| questions to both and Opus blew GPT4 out of the water on
| every single task. Didn't matter if I was giving them 50
| lines or 5,000: Opus would consistently give
| working/correct solutions while GPT4 preferred to answer
| with pseudo code, half complete code with 'do the thing
| here' comments, or would just tell me that it's too
| complicated.
| happypumpkin wrote:
| Another data point, I definitely find Opus better for
| coding, but not by much. The problems I give them are
| generally short (<= 100 lines) and well-defined so any
| advantage Opus has in larger contexts won't be apparent
| to me. They're also generally novel problems but NOT
| particularly challenging (anyone with a BS CS should be
| able to solve them in < 1hr).
|
| I have them working with mostly C++ and Clojure, a bit of
| Python, and Vimscript every once in a while. Both models
| are much better at Python and fairly bad at Vimscript.
| Clojure failure cases are mostly from invented functions
| and being bad at modifying existing code. I can't pick
| out a strong pattern in how they fail with C++, but there
| have been a few times where GPT4 ends up looping between
| the same couple unworkable solutions (maybe this
| indicates a poor understanding of prior context?).
| xetsilon wrote:
| Spot on. People need to say what they are actually using
| the models for and not just "coding".
|
| I mostly use it to make react/javascript front ends to a
| python/fastapi backend and chatGPT4 is great at that.
|
| I tried to write a piece of music though in the old
| Csound programming language and it barely even works.
|
| It will be interesting to see how the context plays out
| because I have noticed that I can often give it extra
| context that I think will be helpful but end up causing
| it to go down a wrong path. I might even say my best
| results have been from the most precise instructions
| inside the smallest possible context.
| dontupvoteme wrote:
| Yeah, GPT is incredibly lazy; ironically, 3 is far better
| at not being lazy than 4.
|
| I guess you benchmarked via API? I've heard even the
| datestamped models have been nerfed from time to time...
| theaussiestew wrote:
| It's because LMSYS is an aggregate Elo across a range of
| different tasks. Individually, in some very important
| areas, Claude Opus may be better than GPT-4 by 50-100 Elo
| points, which is quite a lot. However, there are specific
| domains where GPT-4 has the advantage because it's been
| fine-tuned based on a lot of existing usage. So weak
| points around logic puzzles or specific instructions
| don't bring down its Elo, whereas Claude Opus doesn't have
| this advantage yet. I believe Opus's eventual Elo, after
| all these little areas of weakness are fine-tuned away, will
| be something like 1300.
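For reference, the Elo mechanics behind such head-to-head leaderboards reduce to a simple update rule. A minimal sketch (the K-factor and constants here are illustrative; LMSYS's exact method differs):

```python
# Standard Elo: expected win probability plus a K-weighted update.
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a: float, rating_b: float, a_won: bool,
               k: float = 32.0):
    """Return the new (A, B) ratings after one head-to-head comparison."""
    e_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - e_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# A 100-point gap corresponds to roughly a 64% expected win rate,
# which is why a 50-100 point lead on one task type is substantial.
print(round(expected_score(1200, 1100), 2))  # 0.64
```

Note that the update conserves the total rating: points lost by one model are gained by the other.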
| strikelaserclaw wrote:
| I've been a paid consumer of ChatGPT-4 for a while, and I've
| been recently using Gemini more and more. It's free, and while
| not as good as GPT-4, it is good enough that paying for
| GPT-4 doesn't seem worth it anymore.
| ben_w wrote:
| We don't talk about Google "not having a moat" since their
| patents on PageRank expired.
|
| All the talk of OpenAI's moats (or lack thereof) since the
| memo, seems like humans being stochastic parrots.
| ldjkfkdsjnv wrote:
| Google has a data moat on traditional search. It's really hard
| to train a traditional search system without Google-scale
| data. LLMs effectively sidestepped that whole issue.
| ben_w wrote:
| Google's special sauce is the data of all users' responses
| to the things Google's algorithm shows them.
|
| This description also works for ChatGPT: it's what ChatGPT has
| that others do not (at anything close to the same scale).
|
| Google's search engine is a well-understood bit of matrix
| multiplication on a scale many others can easily afford to
| replicate.
|
| This description also works for GPT-3.
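The "well understood bit of matrix multiplication" can be sketched as a toy PageRank power iteration (a simplified pure-Python illustration; real deployments differ in scale, not in kind):

```python
# PageRank power iteration: each page's score is redistributed along
# its outgoing links, with a damping factor for random jumps.
def pagerank(links, damping=0.85, iters=100):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for page, outs in links.items():
            targets = outs if outs else pages  # dangling page: spread evenly
            for target in targets:
                new[target] += damping * rank[page] / len(targets)
        rank = new
    return rank

# Toy graph: A <-> B, C -> A. A accumulates the most rank.
ranks = pagerank({"A": ["B"], "B": ["A"], "C": ["A"]})
print(max(ranks, key=ranks.get))  # A
```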
| ldjkfkdsjnv wrote:
| No, because ChatGPT was trained on the whole internet
| using a new algorithm. You needed perfectly tagged data for
| the old algorithm (PageRank). LLMs are a fundamental
| leap forward.
| ben_w wrote:
| The GPT algorithm isn't a secret.
|
| The RLHF data -- users giving a thumbs up or down, or
| regenerating a response, or even getting detectably angry
| in their subsequent messages -- is.
|
| PageRank isn't a secret.
|
| Google's version of RLHF -- which link does a user click
| on, do they ignore the results without clicking any, do
| they write several related queries before clicking a
| link, do they return to the search results soon after
| clicking a link -- is also secret.
|
| That the Transformer model is a breakthrough doesn't make
| it a moat; that the Transformer model is public doesn't
| mean people using it don't have a moat.
|
| Hence my criticism of the use of "moat" as mere parroting.
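The implicit signals listed above can be pictured as a preference record. A hedged sketch, with field names that are purely illustrative (no real schema is implied):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackEvent:
    prompt: str
    response: str
    thumbs_up: Optional[bool]  # explicit rating, if the user gave one
    regenerated: bool          # user asked for a different answer
    followup_negative: bool    # detectably unhappy next message

def preference_label(event: FeedbackEvent) -> int:
    """Collapse the signals into a crude +1 / 0 / -1 training label."""
    if event.thumbs_up is True:
        return 1
    if event.thumbs_up is False or event.regenerated or event.followup_negative:
        return -1
    return 0

good = FeedbackEvent("q", "a", thumbs_up=True, regenerated=False,
                     followup_negative=False)
bad = FeedbackEvent("q", "a", thumbs_up=None, regenerated=True,
                    followup_negative=False)
print(preference_label(good), preference_label(bad))  # 1 -1
```

The point is the asymmetry: anyone can run the public algorithm, but only the incumbent accumulates millions of these records.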
| rchaud wrote:
| Google's acquisitions are its moat. Google Search is only one
| small part of that
|
| Google wouldn't be the company it is if it didn't also
| control Youtube, Gmail, Android and Chrome.
| mcbuilder wrote:
| Turboderp released a custom quantized version of dbrx-base and
| instruct a few days after the official model landed.
| https://huggingface.co/turboderp/dbrx-base-exl2
|
| You are right, LLM weights are fast getting commoditized. As
| fast as compute is scaling now, I don't think it can continue
| forever, so the huge advantages big players have are going to
| get wiped out. I'm sure they will always be able to deliver at
| scale, but a super-GPT-4-performing model available on your
| local (or small-player) hardware in the next few years
| seems more likely than a few big mega-players and mega-models.
| xpe wrote:
| The comment above seems to be interpreting OpenAI's goals in a
| way that is not consistent with its charter.
|
| Structurally, for OpenAI, capped profit is a means not the end.
| If the capped profit part of OpenAI is not acting in alignment
| with its mission, it is the OpenAI board's responsibility to
| rein it in.
|
| --
|
| https://openai.com/charter
|
| OpenAI's mission is to ensure that artificial general
| intelligence (AGI)--by which we mean highly autonomous systems
| that outperform humans at most economically valuable work--
| benefits all of humanity. We will attempt to directly build
| safe and beneficial AGI, but will also consider our mission
| fulfilled if our work aids others to achieve this outcome. To
| that end, we commit to the following principles:
|
| Broadly distributed benefits
|
| ...
|
| Long-term safety
|
| ...
|
| Technical leadership
|
| ...
|
| Cooperative orientation
| supposemaybe wrote:
| There's no such thing as a free lunch.
| bongodongobob wrote:
| Yup, in exchange they get their models trained by even more
| people.
| baal80spam wrote:
| Kind of a win-win situation.
| frankfrank13 wrote:
| Makes sense, since Bard + Bing are free and don't require sign-in.
| Galanwe wrote:
| April Fools'?
| appel wrote:
| I'll never understand companies making genuine product
| announcements on April 1st; the most famous example of this
| has to be Gmail. I mean, really? You just have to do it on
| this day? Couldn't you do it yesterday or tomorrow, or any of
| the 364 other days in the year?
| ViktorRay wrote:
| The Gmail one is a classic though. It was an excellent way to
| build awareness, and it seems to have worked out for them too.
| amelius wrote:
| Well, the "beta" tag turned out to be a joke.
| Hoasi wrote:
| Companies coming up with outlandish claims on April 1st can
| look fun or enticing, and that's great PR. But when a company
| makes a real announcement just because it's April 1st, or
| worse, when the CEO is the only one who thinks it's fun, it's
| just cringe.
| bilsbie wrote:
| On the other hand it could be a great way to float wild
| product ideas and see if there's interest.
|
| I'm surprised no companies try that.
| ViktorRay wrote:
| I believe OpenAI is doing this because they are scared of
| Microsoft.
|
| More specifically: Bing Chat is free, and you can converse with
| it without an account for a few prompts at a time.
|
| Just go Incognito in your browser if you run into any
| limits on Bing Chat.
|
| Bing Chat uses OpenAI tech, but OpenAI doesn't make money from
| it, so OpenAI is probably worried people will use Bing more.
| It's an interesting relationship they have with Microsoft:
| they provide Microsoft with their tech in exchange for access
| to Microsoft's data centers, but this leaves them at risk of
| Microsoft overtaking them in the AI space with OpenAI's own
| tech.
|
| Fascinating
| koolba wrote:
| I bet they're doing this to preemptively get around any future
| laws that prevent young people from signing up.
|
| No sign up required means no age verification either.
| diggan wrote:
| Why would young people be forbidden from using LLMs in the
| future?
| ben_w wrote:
| Age-inappropriate responses, by whatever that means to
| whoever sets the rules.
| jiayo wrote:
| Social media has been the boogeyman for almost a decade now,
| and (red) states are at least sending out trial balloons
| regarding banning minors from accessing social media[1].
| Prior to that, it was porn, and now we have age verification
| required by law.
|
| I'd argue that AI is a much bigger boogeyman than
| Instagram/Tiktok/Pornhub ever was.
|
| [1] No judgement of whether this is a good idea or not; in
| some sense it probably is; but I feel the current discourse
| is reactionary/political and not really about actual people's
| actual well being.
| Cheer2171 wrote:
| > No sign up required means no age verification either.
|
| That is an absurd conclusion to speculatively leap to, and
| wrong. Typically age-gating laws are written so that it doesn't
| matter if you have to create an account or not. Porn has always
| been the most common case for this kind of legislation. Most
| people don't create accounts to watch porn, and most porn is
| free without needing to sign up. The jurisdictions that require
| age verification still apply to porn sites where you don't need
| to sign up.
| Hoasi wrote:
| OpenAI starts looking slightly desperate.
|
| No surprise here.
| yinser wrote:
| There is still a huge volume of people who haven't used a large
| language model, and haven't had their own "aha" moment. Getting
| your core functionality available at the front door is good
| marketing, and they have the funding to burn GPT-3.5 tokens.
| lxgr wrote:
| I know what you mean, but in terms of phrasing, one might even
| say they have a way of cheaply creating more GPT-3.5 tokens if
| they ever run out :)
| HarHarVeryFunny wrote:
| It seems most of these companies are increasingly using
| synthetic data (use one generation of LLM to generate
| specific types of training data for the next) rather than
| just looking for more/better human generated data.
| pixiemaster wrote:
| my interpretation: MAU are going down rapidly and they need a KPI
| to counter that
| brcmthrowaway wrote:
| This. It was a nice toy, but people moved on when they realized
| it couldn't do useful, valuable work.
| dieselgate wrote:
| I agree with this as a professional but aren't many students
| likely still using LLMs?
| dvt wrote:
| I think more and more people are slowly realizing that LLMs
| aren't products in themselves. At the end of the day, you need to
| do something like Midjourney or Copilot where there's some value
| generation. Your business can't _just_ be "throw more GPUs at
| more data" because the guy down the block can do the same thing.
| OpenAI never had any moat, and it's a bit telling that as early
| as last year, everyone on HN acted as if they did.
|
| LLMs are like touch screens: technically interesting and with
| great upside potential. But until the iPhone, multi-touch, the
| app ecosystem comes along, they'll remain an interesting
| technological advancement (nothing more, nothing less).
|
| What I'm also noticing is that very little effort (and money) is
| actually spent on value generation, and most is spent on pure
| bloat. LangChain raising $25m for what is essentially a library
| for string manipulation is crazy. (N.B. I'm not solely calling
| out LangChain here, there are dozens of startups that have raised
| hundreds of millions on what is essentially AI tooling/bloat.) We
| need apps--real, actual apps--that leverage LLMs where it
| matters: natural-language interaction. Not pipelines upon
| pipelines upon pipelines or larger and larger models that are fun
| to chat with but actually _do_ basically nothing.
| jedberg wrote:
| 80% of people just want the ability to write a sentence and
| have it run the correct SQL on their data warehouse. That's the
| biggest use of LLMs I see in the enterprise right now.
|
| I think you are right -- eventually we will see an ecosystem
| pop up that uses LLMs in unique ways that are only possible
| with LLMs.
|
| But in the meantime people are using LLMs to shortcut what used
| to be long processes done by humans, and frankly, there is a
| lot of value in that.
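The "sentence to SQL" workflow above usually needs two deterministic pieces around the model: a prompt that carries the schema, and a guard that keeps generated SQL read-only. A sketch with the model call replaced by a canned answer (the schema and names are illustrative, not any particular product's):

```python
import sqlite3

SCHEMA = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)"

def build_prompt(question: str) -> str:
    """Include the schema so the model can target real columns."""
    return (
        "You translate questions into SQLite SQL.\n"
        f"Schema:\n{SCHEMA}\n"
        f"Question: {question}\n"
        "Answer with a single SELECT statement."
    )

def is_read_only(sql: str) -> bool:
    """Refuse anything but a bare SELECT before touching the warehouse."""
    return sql.strip().rstrip(";").upper().startswith("SELECT")

def run_query(conn: sqlite3.Connection, sql: str) -> list:
    if not is_read_only(sql):
        raise ValueError("refusing non-SELECT statement")
    return conn.execute(sql).fetchall()

# Demo with a canned "model answer" standing in for the LLM call.
conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.execute("INSERT INTO orders VALUES (1, 'acme', 120.0)")
model_answer = "SELECT customer, SUM(total) FROM orders GROUP BY customer"
print(run_query(conn, model_answer))  # [('acme', 120.0)]
```

The guard is the part that makes this deployable at all: an enterprise will tolerate a wrong SELECT far more readily than an LLM-authored DELETE.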
| dvt wrote:
| > 80% of people just want the ability to write a sentence and
| have it run the correct SQL on their data warehouse.
|
| Even this is IMO too ambitious (due to edge cases, and multi-
| shot prompting being basically required, which kind of
| defeats the purpose). Right now, I'm working on a product
| that can just do simple OS stuff (and even that is quite
| challenging); e.g. "replace all instances of `David` with
| `Bob` in all text files in this folder".
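A command like the one above has to compile down to a deterministic file edit. A sketch of that target action, kept separate from the LLM that parses the intent (illustrative only, not the parent's actual product):

```python
import tempfile
from pathlib import Path

def replace_in_folder(folder: Path, old: str, new: str,
                      pattern: str = "*.txt") -> int:
    """Apply the substitution to every matching file; return how many changed."""
    changed = 0
    for path in sorted(folder.glob(pattern)):
        text = path.read_text()
        if old in text:
            path.write_text(text.replace(old, new))
            changed += 1
    return changed

# Demo on a throwaway folder.
tmp = Path(tempfile.mkdtemp())
(tmp / "a.txt").write_text("David met David.")
(tmp / "b.txt").write_text("No match here.")
n_changed = replace_in_folder(tmp, "David", "Bob")
print(n_changed, (tmp / "a.txt").read_text())  # 1 Bob met Bob.
```

The hard part the parent alludes to is not this function but reliably mapping free-form English onto its three arguments.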
| namanyayg wrote:
| I agree with your points about LLMs being like touch screens
| but what kind of natural language interaction apps are you
| talking about? Might be that some startup has attempted
| something similar already.
| digging wrote:
| I'm a different person but the key for me is a wrapper that
| can handle uncertainty and ask questions for clarification.
|
| Currently I don't find LLMs tremendously useful because I
| have to be extremely specific with what I want, which IMO is
| closer to writing code than the magical promise of turning
| ideas into products.
|
| An app that I can just talk into and say, "I want to do X"
| and it can start building X while asking me questions for
| functional requirements, edge cases, UI/UX, that's a killer
| app.
|
| It's also an app which could _actually_ decimate many SWE
| jobs, so, I should be careful what I wish for.
| Abecid wrote:
| Reminds me of the crypto bubble: all the DeFi shitcoins giving
| yields to each other while no shitcoin actually generated real
| value or had real demand. Now everyone is building LLM
| libraries and systems, but very few LLM products yield real
| value.
| rchaud wrote:
| There's a big difference cost-wise: anybody can deploy an
| Ethereum fork, mine the resultant shitcoin very cheaply and
| start a Discord for marketing purposes. That's the whole
| product right there.
|
| The same is not true of AI projects which require a lot more
| upfront investment.
| minimaxir wrote:
| The LangChain fundraise was a year ago, the same time the
| ChatGPT API was released and there was much more optimism about
| generative AI.
|
| Things have cooled a bit more nowadays as the cost economics
| became more real. I wouldn't expect another LangChain-level
| fundraise.
| rchaud wrote:
| The need for concrete products is acute. I'm in the martech
| space and it's ridiculous how many big-name software vendors
| have done nothing besides slap an "AI-powered" label on their
| traditional products (CRM + email marketing and API-connector
| type apps).
|
| Tried talking to the reps about the AI features. All were very
| cagey, suggesting that they didn't really know and were
| parroting buzzwordy benefits because the higher-ups told them
| to. So far, the "AI revolution" seems to have just led to
| higher price tiers.
| parineum wrote:
| > The need for concrete products is acute.
|
| This feels very crypto-like in this regard for me. Businesses
| seem to be struggling with what we're all struggling with,
| LLMs are untrustworthy.
| rchaud wrote:
| Businesses' needs are clear: handle payroll, handle data
| analysis, conduct sentiment analysis of our brand across
| social media, identify problems. Do this effectively and at
| a far lower price than what a human would agree to work
| for.
|
| AI isn't moving the needle much here, which is why GenAI
| "solutions" are underwhelming things like support chatbots,
| writing assistance and auto-generated stock art for
| Facebook ads.
|
| There's nothing wrong with offering that as a service, but it
| falls well short of the fantastical visions that Microsoft,
| OAI, and SV at large have portrayed.
| mg wrote:
| I was waiting for this.
|
| Google showed that providing an input field in which people
| can put their questions, and then displaying ads next to the
| answers, is one of the best business models ever. If not _the_
| best business model ever.
|
| Google was free, open and without ads in the beginning. It is
| just too tempting to become the Google of the AI era to not try
| and replicate this story.
|
| Microsoft is trying it too, but for some reason they do not
| provide the simple, clean user interface that made Google
| successful. I always wondered why they didn't do that with
| Bing search. And now with Bing Copilot they've chosen the same
| strange route of a colorful and somewhat confusing interface.
|
| Let's see how OpenAI does.
| rchaud wrote:
| Google.com's "clean UI" differentiator gave it a first-mover
| advantage compared to the average 90s search engine, but that's
| as far as the UI impact goes.
|
| From 2007 onwards, Google has maintained its dominance through
| decidedly non-search channels: the primacy of Youtube, the
| Chrome browser, Android OS, and of course, paying Apple
| billions to make Google its default search engine.
| mg wrote:
| How do you explain that Bing has only 10% of the search
| market share on the desktop?
|
| Windows has over 70% market share on the desktop. Windows
| comes with Edge as the default browser and Edge has Bing as
| the default search engine.
|
| But Edge only has 12% of the Desktop browser market share.
|
| It looks like 6 out of 7 Windows users switch to Chrome for
| some reason. If not for the cleaner interface - why?
| rchaud wrote:
| > How do you explain that Bing has only 10% of the search
| market share on the desktop?
|
| It's because downloading Chrome is the first piece of
| advice anyone receives when setting up a new computer,
| Windows or Mac.
| andybak wrote:
| Advice from whom?
| the_snooze wrote:
| >It looks like 6 out of 7 windows users switch to Chrome
| for some reason. If not for the cleaner interface - why?
|
| They don't switch to Chrome. They're _already using
| Chrome_. And odds are, they probably have been since the
| early 2010s, if not earlier---long before Edge was a thing.
|
| When they get a new computer, they install Chrome because
| they're already in that ecosystem: bookmarks, saved
| passwords, customization, Google accounts, familiarity.
| They won't suddenly use Edge because it has none of that.
| HPMOR wrote:
| I literally only use chrome on windows because of the
| cleaner interface.
| MisterBastahrd wrote:
| Because when IE was utter garbage, Chrome had better
| performance, an ecosystem that included GMail, and also
| stored our passwords and bookmarks. Chrome also eventually
| allowed you to conduct google searches directly from the
| address bar. People use what they are comfortable with, and
| all the functionality built into Chrome by google is a HUGE
| sunk cost to bypass. The only person I know who uses Edge
| and Bing regularly does it to earn gift cards from
| Microsoft.
| iamthirsty wrote:
| Also, Gmail.
| dinobones wrote:
| All of the 25 comments so far are missing the point.
|
| Surprisingly, YC's prevailing opinion is still "ChatGPT is a
| useless stochastic parrot," probably because they are too cheap
| to fork over $20 to try GPT4.
|
| Yes, GPT3.5 is mostly a toy. Yes, it hallucinates a non-trivial
| amount of the time. But IMO, GPT4 is in a completely different
| class and has almost entirely replaced search for me.
|
| If OpenAI really wants ChatGPT to challenge search, it has to
| be free and accessible without requiring a sign-up.
|
| I very rarely use any search engine now. Really I only use search
| when I'm looking for reddit threads or a specific place in Google
| Maps.
|
| All of my other queries: how things work, history, how to set
| up my Wacom, unit conversions, calculating a mortgage,
| explaining stdlib functions with examples, and so on... All of
| that goes to ChatGPT. It's a million times faster and more
| efficient than scrolling through endless SEO blog spam and
| overly verbose Medium articles.
|
| This update makes GPT-3.5 available without sign-up, not GPT-4.
| But if/when GPT-4 becomes available without sign-up, I have no
| doubt the rest of the population will experience the same
| lightbulb moment I did, and it will replace search for them as
| well.
| rchaud wrote:
| > probably because they are too cheap to fork over $20 to try
| GPT4.
|
| Or because GPT 3.5 was hyped to the skies by all and sundry,
| and those who were convinced enough to use it still found it
| lacking. Many like you are now saying "oh yeah GPT 3.5 was
| awful, but _this_ really is the future".
|
| Not everybody wants to have the quality of their work dependent
| on the whims of an OAI product manager. If GPT-4 is as good as
| claimed, then it will find its way into my workflow. For now,
| AI claims are treated as fiction unless accompanied with a
| JSFiddle-style code example...too much snake oil in the water
| to do otherwise.
| ChikkaChiChi wrote:
| Bard already integrates with Google Search. These companies
| aren't competing to be the best, just good enough within
| existing form factors.
| mdp2021 wrote:
| > _mostly ... but_
|
| I have been busy with other matters, so:
|
| in the absence (I presume) of a model of their reasoning,
|
| have any metrics or measurements been produced to gauge the
| deep reliability (e.g. absence of hallucination, logical
| consistency, etc.) of LLMs?
| hagbard_c wrote:
| Now let's see the results of having one of these bots debate
| another one, a verbal version of Robot Wars (BattleBots on the
| western side of the Atlantic). Gentlemen, prepare your chatbots!
| kleiba wrote:
| _We may use what you provide to ChatGPT to improve our models for
| everyone. If you'd like, you can turn this off through your
| Settings._
|
| Please correct me if I'm wrong, but I do not think this can be
| GDPR compliant: there's a good chance people might enter personal
| information, and in that case OpenAI cannot just use said data
| for their own purposes without _explicit_ consent. And the
| keyword here is "explicit" - just saying "by using ChatGPT, you
| automatically agree to your data being used by us - just don't
| use it if you don't agree, or turn it off in the settings" does
| not work.
| wewtyflakes wrote:
| It does not look like this is rolled out to Europe, or at least
| not broadly. (Tested using a VPN to France)
| nico wrote:
| > Starting today, you can use ChatGPT instantly, without needing
| to sign up
|
| It's actually been up since at least Thursday of last week, maybe
| longer
|
| I wanted to show ChatGPT to someone who hadn't used it before,
| they went to chat.openai.com and were able to use it without
| creating an account
| xigoi wrote:
| Wait, so this is not an April Fools joke?
| CSMastermind wrote:
| > We've also introduced additional content safeguards for this
| experience, such as blocking prompts and generations in a wider
| range of categories.
|
| Oh good, just what wasn't needed.
| cft wrote:
| Is this for no-account users, or for everyone, including _paid_
| users?
|
| In the past, I thought that search engines censored content
| because of advertisers' demands, but now that AI search has
| switched to a paid model, I think the censorship is purely
| ideological, because these companies are based in San
| Francisco.
| ethanbond wrote:
| So if you picked up the building and put it elsewhere it'd
| produce the same product but different politics?
| Spivak wrote:
| I get that this sucks but what's the alternative? It would take
| legislation that will never pass to eliminate OpenAI's
| liability for the content that their model spits out and
| they're already under a microscope by regulators and the
| public.
| not2b wrote:
| This doesn't have anything to do with their liability or
| regulations, it's what their paying customers want. A company
| that pays them to provide a chatbot for their business
| doesn't want the chatbot to say controversial things or tell
| people to commit suicide.
| bilbo0s wrote:
| Actually if it's about customers not paying them if they
| spit out "kill yourself" content, then I 100% understand
| that.
|
| These are businesses. Those servers cost money. That
| compute costs even more. The AI experts (_real_ AI experts, by
| the way, not tensorflow and pytorch monkeys) cost big money as
| well. Someone better be paying for all
| that.
|
| So if the people who want offensive content are willing to
| pony up the dollars, then great. I've got no problem with
| giving them what they want. But if they want Barbie or
| Cocomelon to pay for the offensive content, then yeah, they
| can be safely ignored. Block as much as you like. (Or
| rather, block as much as Barbie wants blocked. Which is
| probably more than you like, but they're paying you
| handsomely for it.)
| Y_Y wrote:
| I may just be a pytorch monkey, but I cost more than a
| training server.
| nextaccountic wrote:
| The alternative is running models locally
| francescovv wrote:
| The alternative would be to have the model itself be aware of
| sensitive topics and meaningfully engage with questions
| touching on them with that awareness.
|
| It might be a while till that is feasible, though. Until then,
| "content safeguards" will continue to feel like overreaching,
| artificial stonewalls scattered across an otherwise
| kinda-consistent space.
| NEETPILLED wrote:
| The alternative is to stop giving a pass to censorship
| obviously?
| pixl97 wrote:
| You obviously don't understand what liability means.
| littlestymaar wrote:
| That's cool: it means I can now log out of OpenAI, and that
| under the GDPR, OpenAI has no right to keep any of the data I'm
| feeding to ChatGPT.
| jcbuitre wrote:
| The ability to use them anonymously used to be my only
| criterion for interacting with an LLM operated by a third
| party.
|
| My anonymity for your free training data seemed a reasonable
| barter.
|
| It's the reason phind is really the only one I use at all.
|
| But GPT-4's parent company has proven it thinks it is above the
| law, and even good-faith morality, so I refuse to provide them
| any more data than the shit they already stole from me and my
| fellow creatives.
|
| May thy api chip and shatter.
| makach wrote:
| This is the way.
| sylware wrote:
| Failure: missing noscript/basic (x)html interop
| tailspin2019 wrote:
| > We may use what you provide to ChatGPT to improve our models
| for everyone. If you'd like, you can turn this off through your
| Settings - whether you create an account or not.
|
| A quick moan about this:
|
| As a long-time _paying_ subscriber of ChatGPT (and their API), I
| am extremely frustrated that the "Chat history & training"
| toggle still bundles together those two _unrelated concepts_. In
| order for me to not have my data used, I have to cripple the
| product (the one I'm paying for) for myself.
|
| It's great that they're making the product available to more
| people without an account but I really wish they would remove
| this dark pattern for those of us who are actually paying them.
| ThrowawayTestr wrote:
| I always make sure to thank chatgpt when it does a good job. I
| like to think I'm helping to improve the model :)
| InvertedRhodium wrote:
| I do it on the off chance that the ChatGPT history is used to
| train future AGIs, and want to ensure that my descendants are
| at least considered well-behaved pets, if nothing else.
| andrewmutz wrote:
| Roko does the same thing, but for different reasons
| ClassyJacket wrote:
| I'd settle for just not logging me out every day.
___________________________________________________________________
(page generated 2024-04-01 23:01 UTC)