[HN Gopher] Anthropic hires OpenAI co-founder Durk Kingma
___________________________________________________________________
Anthropic hires OpenAI co-founder Durk Kingma
Author : coloneltcb
Score : 140 points
Date : 2024-10-01 18:03 UTC (4 hours ago)
(HTM) web link (techcrunch.com)
(TXT) w3m dump (techcrunch.com)
| nuz wrote:
| It's exciting to wonder where Anthropic will be in like 5 years
| time with this incredible momentum
| Workaccount2 wrote:
| Where society will be...
| infinitezest wrote:
| Let's catch up about this together while we're waiting in the
| breadline.
| Alupis wrote:
| > Anthropic's CEO, Dario Amodei, was once the VP of research at
| OpenAI, and reportedly split with the firm after a disagreement
| over OpenAI's roadmap -- namely its growing commercial focus.
|
| So he's now the CEO of Anthropic, a company selling AI services?
|
| Claude is amazing, and we use its Teams plan here at the office
| extensively (having switched from ChatGPT since Claude is vastly
| better at technical material and ad copy writing).
|
| But, Anthropic definitely has a commercial motive... no?
|
| I'm not saying a commercial motive is a bad thing - hardly... but
| this quote seems odd given the circumstances.
| _boffin_ wrote:
| Can you go into more depth on why you believe Claude is better
| at creating ad copy?
| Alupis wrote:
| In my experience it is better at not sounding like an LLM
| wrote it, even without being directed to not sound like an
| LLM. It's better able to find and maintain the desired tone
| (playful, silly, professional, a mixture of these, etc.) with
| minor prompting. It also seems better at understanding your
| business/company and helping craft ad copy that's on-
| message/theme.
|
| We used ChatGPT's Teams plan too with GPT-4, but were sold on
| Claude almost immediately. Admittedly we have not used GPT-4o
| recently, so we can't compare.
|
| With technical information, Claude is vastly better at
| providing accurate information, even about lesser-known
| languages/stacks. For example, its ability to discuss and
| review code written in Gleam, Svelte, TypeSpec and others is
| impressive. It is also, in our experience, vastly better at
| "guided/exploratory learning" - where you keep probing with
| questions as you go down a rabbit hole.
|
| Is it always accurate? Of course not, but we've found it to
| be on average better at those tasks than ChatGPT.
| bbor wrote:
| You're absolutely correct that they're a for-profit firm, but
| you're missing that they were founded specifically over safety
| concerns. Basically it's not just "commercial motive" in
| general, it's the sense that OpenAI was only paying lip service
| to safety work as a marketing move.
|
| For example, here's their research mission:
| https://www.anthropic.com/research
|
| And an example of one of their early research focuses,
| Constitutional AI: https://arxiv.org/abs/2212.08073
| Mistletoe wrote:
| At least Anthropic is honest about their intentions though.
| That would be enough for me to leave OpenAI. Hey, if you want to
| commercialize it, sure, but don't hide behind lies.
| blitzar wrote:
| Pivoting from "for all mankind" to "all for myself" would make
| me deeply uncomfortable, too. The change from one position to
| the other, not either position in any absolute sense, is the
| concerning part.
| ygjb wrote:
| I think that the approach that Anthropic is taking to
| governance is a little different than "all for myself", it's
| worth having a read of https://www.anthropic.com/news/the-
| long-term-benefit-trust
|
| It sounds like they are at least trying to build on the notion
| of being a public benefit corporation, and create a business
| that won't devolve into _chart must go up and to the right
| each quarter_.
|
| Time will tell of course, OpenAI was putatively started with
| good, non-profit intentions.
| freejazz wrote:
| Does anyone actually believe any of this horsesh*t?
| bbor wrote:
| This is also a great point. I ranted at length about this
| when the OpenAI news broke last week, but to cut it short:
| it's a little troubling to see the company founded on the
| ethos "for-profit AI work is incredibly dangerous" transition
| to a for-profit AI firm openly engaged in an arms race. Not
| just engaged, _inciting_...
|
| https://web.archive.org/web/20230714043611/https://openai.co.
| ..
| nurettin wrote:
| > Pivoting from "for all mankind" to "all for myself"
|
| Isn't the former already a red flag?
| llamaimperative wrote:
| No it's good to try to build tech that helps people.
| Doesn't mean such declarations need to be taken at face
| value, but being baseline-cynical is generally unwarranted,
| undesirable, and uninteresting.
| TiredOfLife wrote:
| As I understand it, the main disagreement between OpenAI and
| Anthropic is exactly how much and what gets censored.
| CaptainFever wrote:
| I'm not sure what Anthropic means when they say safety. I
| remember them doing good, non-censorship work in
| this field, but I also pay for ChatGPT instead of Claude
| because Claude is just so censored and boring.
| heroprotagonist wrote:
| I don't think it's odd.
|
| * Acting in accordance with declared motivations is a
| demonstration of integrity.
|
| * Acting towards hidden motivations that oppose your declared
| motivations is deceptive action.
|
| Honest people don't want to lead and be responsible for
| deceptive action, even if the action is desirable.
|
| For these types of people, it is often better to leave a place
| that requires them to act deceptively in favor of one that
| will let them operate with integrity.
|
| Even if the end goal is the same, eg: to make money.
| ulfw wrote:
| Left door, right door
| bbor wrote:
| Always trippy for us apocalyptic optimists to read coverage about
| safety concerns and consolidating power in AI firms that reads
| exactly like what these companies have been reporting for 20
| years about smartphone apps, B2B SaaS battles, and hospitality
| industry schemes. Reminds me of today's articles on the
| escalating war involving at least one nuclear power mentioning
| the Dow Jones as the fourth bullet point, but on an even larger
| and more ridiculous scale.
|
| Godspeed to Anthropic! Hopefully they can be a force for good,
| despite the various deals with the devil that they've taken.
| They've lost so many safety and e/acc people that I was getting
| dubious, but they certainly are staying in the fight.
|
| Shame they're already for-profit... But don't worry, they Pinky
| Promise to be For The Public Benefit :)
| throwup238 wrote:
| _> Shame they're already for-profit... But don't worry, they
| Pinky Promise to be For The Public Benefit :)_
|
| Anthropic is legally a Delaware Public Benefit corporation so
| it's written into their corporate governance.
|
| How effective that governance will be at balancing the public
| benefit with profit remains to be seen, but it's a lot more
| than just a pinky promise.
| bbor wrote:
| Thanks for the correction! I was mixing up "Public Benefit
| Corporation", which is a legal offering by state governments,
| and "B-Corp", which is a certification granted by a non-profit
| to wholesome for-profit firms like the Tillamook Dairy
| Cooperative.
|
| I'd stand by the general assertion that it's little more than
| a pinky promise because they merely have to "balance" the
| concerns according to "any reasonable person"--an extremely
| weak-seeming obligation to this non-lawyer--but it's
| certainly much more impactful than I thought, namely:
| "Sections 365(b) and (c) provide broad protection to
| directors of public benefit corporations against claims based
| on interests other than those of stockholders."
|
| https://www.legis.delaware.gov/BillDetail?LegislationId=2235.
| ..
|
| Good on you, Anthropic! In this specific case I believe in
| the director(s) a lot more than I believe in the shareholders
| ethics-wise, so it seems like a perfect choice. They can
| always fire him/them I suppose, but truly catastrophic AI
| risks would move faster than that, anyway.
| sirspacey wrote:
| Yes, this is the real legal accomplishment of B-Corps!
|
| By law and precedent a C-Corp's only obligation is to
| shareholders, thanks to a case from almost a century ago:
| https://en.m.wikipedia.org/wiki/Dodge_v._Ford_Motor_Co.
|
| A B-Corp was the first and somewhat successful attempt to
| create a legal framework where company executives are
| allowed to work on behalf of all their stakeholders without
| it creating an automatic basis for a suit.
|
| Generally, people who care very deeply about a thing bring
| a higher ethical standard than any regulatory body can
| impose.
| conshama wrote:
| Seems to me that the only difference between Anthropic and OpenAI
| is that Anthropic was for-profit from day one and OpenAI is from
| day yesterday. I pay for both, and I'm pretty sure they will do
| everything they can to take as much money from me as they can
| get away with.
|
| This shouldn't be news.
| m463 wrote:
| What if your city police force or volunteer firemen _switched_
| to for-profit?
|
| I think that is the crux of the matter.
| astrange wrote:
| The US is unique in how many public services it has. Other
| countries have private firefighter services; that just means
| the city has a contract with them. It doesn't mean they burn
| your house down and charge you for it.
| m463 wrote:
| maybe a better analogy would be:
|
| People set up and fund a public bus system that has
| coverage for all neighborhoods, rich or poor, distant or
| close.
|
| And then after the bus system is up and running, the bus
| system manager decides transportation is important! He IPOs
| the bus system, and changes all the routes to money-making
| routes with cost optimized (higher) fares.
| ipaddr wrote:
| If they started paying volunteers, it would tell me the town
| has more money, not that they will do a bad job now that they
| are paid.
| p1esk wrote:
| It depends on what would actually change.
| thelittleone wrote:
| The for-profit-from-the-start part is true, but also, Sam's
| shenanigans really irk me as a customer, the sleights of hand,
| etc. I get the sense he would mislead the public on the threats
| and risks of AI to benefit OpenAI and the government, to
| centralize and monopolize powerful models.
| scop wrote:
| As somebody new to Claude, can anybody give me tips on how to
| optimally use Claude as opposed to habits formed with ChatGPT?
| For example, my main concern is the limiting of messages over a
| given time period, even for paid accounts. I have often used
| ChatGPT for very specific questions/answers, but sending a large
| collection of "drill-down" follow-up questions can burn through
| my Claude messages pretty quickly. Is it as simple as composing
| longer, more fleshed-out prompts to begin with (addressing
| follow-ups ahead of time), or is this where something like
| Projects helps? Thanks for any feedback!
| throwup238 wrote:
| Projects can make it worse because I think the Claude rate
| limiting is token-based. If I fill up a project to >100k
| tokens, I get rate-limited much faster.
| light_hue_1 wrote:
| My tip would be to use OpenAI instead.
|
| There's an arms race. OpenAI was ahead. Then Anthropic was
| ahead. Now GPT-4o and o1 are better again. This may change in a
| few months.
|
| I'll miss the Projects feature though.
| logicchains wrote:
| GPT-4o and o1 are absolutely not better at coding than Claude.
| They're noticeably worse at following complex, detailed
| prompts (e.g. a detailed prompt for it to create an
| application), and often forget details during the
| conversation (e.g. will revert to an old version of some code
| it already refactored). o1 is better than Claude for Leetcode-
| hard-style coding problems, but the majority of coding work
| isn't about that, it's about correctly implementing a spec.
| Plus even o1 will still often fill code with "implementation
| here" comments, in spite of being explicitly asked to provide
| a full working implementation.
| light_hue_1 wrote:
| I write a lot of code with both in multiple languages. o1
| is astronomically better than Claude. It's not even a
| competition.
| thelittleone wrote:
| Highly recommend using the API with a good client. It's cheaper
| and the limits are way higher. I rarely hit my limit, which is
| just a billing limit I impose on myself intentionally.
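|
| To make that concrete, here's a minimal sketch of a one-off call
| with the official anthropic Python SDK (pip install anthropic);
| the model id below is just a placeholder snapshot, so swap in
| whichever Claude model you have access to:
|
|     import anthropic
|
|     # Reads ANTHROPIC_API_KEY from the environment
|     client = anthropic.Anthropic()
|
|     message = client.messages.create(
|         model="claude-3-5-sonnet-20240620",  # placeholder model id
|         max_tokens=1024,
|         messages=[{"role": "user", "content": "Summarize this spec..."}],
|     )
|     print(message.content[0].text)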
| paxys wrote:
| Mira Murati next?
| wswope wrote:
| Given how much I dislike OpenAI's ongoing shenanigans and disdain
| for their own customers, I tried to sign up for Claude last week.
|
| Turns out that Anthropic's signup flow has been silently broken
| for months for Firefox users:
| https://old.reddit.com/r/ClaudeAI/comments/1bq06yz/phone_ver....
| You get the SMS verification code, and you can enter it, but you
| get a barely visible "Invalid verification code" error message
| followed near-instantly by a refresh of the page. I reached out
| to support, but like many others, heard nothing back.
|
| This barely-disguised contempt for what should likely be their
| most valuable power-user base suggests to me that a lot of the
| recent departures from OpenAI are being driven by push instead of
| pull, and I'm not convinced that Anthropic will remain a
| competent competitor in the LLM arms race long-term.
| HelloMcFly wrote:
| For what it's worth, I signed up for Claude on Firefox without
| issues several days ago. I'm not saying the issue isn't real,
| but it isn't universal for the browser.
| wswope wrote:
| Yeah - I run the ESR version of Firefox, which is probably
| the root of it, but wide prevalence of people hitting the
| same issue for so long + the total radio silence from support
| is what really threw me. At least OpenAI pretends to care by
| responding with AI-generated pseudohelp...
| mistrial9 wrote:
| > total radio silence from support
|
| the new customer reality, courtesy of Google etc.
| hannofcart wrote:
| I'd like to add to this.
|
| I signed up and paid for credits to access their API last
| weekend.
|
| All requests still get rejected saying I don't have sufficient
| credit. This is despite their dashboard saying that I do indeed
| have the requisite credits.
|
| No response despite reaching out to support.
|
| Don't think I have been treated this indifferently by any other
| service in recent times.
| richard___ wrote:
| Lol - talk about making a mountain out of an anthill
| wswope wrote:
| You're not wrong, but I'm more leaning on the anecdote to
| make a broader case that the big walled-garden players
| (Google/OAI/Anthropic) all kinda suck in similar ways.
|
| I.e. - I think Anthropic is seeing a boom right now not
| because they're doing things right, but because the
| competition is doing them worse.
| ygjb wrote:
| > This barely-disguised contempt for what should likely be
| their most valuable power-user base suggests
|
| Yikes. I am a long time Mozilla supporter, active user of
| Firefox since before it was Firefox, and former Mozilla
| employee, but this comment is pretty crazy.
|
| Firefox is well below 3% market share, and is essentially a
| niche browser at this point - it sucks when I run into sites
| and services that aren't supported by Firefox, but I don't
| assume that it's contempt for me as a Firefox user. I simply
| assume that I, as a power user, have opted to use an alternate
| tool that has features that are compelling to me, and I
| certainly don't expect every business out there to prioritize
| my use of a niche tool.
|
| I learned a long time ago that while power users can be an
| effective avenue for building a market for niche products, they
| also end up being some of the most problematic users, because
| of the assumption that power-user needs should be placed above
| those of regular users. It's fine to want to be catered to, but
| it's not really great to assume malice when you aren't - it
| shows contempt for the prioritization of the limited resources
| they have available.
| wswope wrote:
| Firefox is going to be massively disproportionately more
| popular among LLM users, the issue has been happening to
| people for over half a year, the error message is terribly
| misleading, and from a CS POV, it should be relatively easy
| to flag and route customers around the known issue.
|
| I'm not salty just because they don't support the browser;
| you're totally right that that'd be an unreasonable take, but
| it's not the one I'm trying to make.
| mvdtnz wrote:
| > Firefox is going to be massively disproportionately more
| popular among LLM users
|
| Source for this claim?
| kombine wrote:
| Can anybody suggest a good open source (GUI or terminal-based)
| app for chatting with Claude Sonnet for those who have API keys?
| I use those for a neovim plugin to chat given the context of
| codebase, but I would also like an ability to have a regular chat
| like in the web interface?
| mathrawka wrote:
| https://pypi.org/project/llm/
|
| I use Open WebUI for when I want a website with some more
| features than a terminal provides.
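|
| If you just want something throwaway in the terminal, a chat
| loop against the API is only a few lines. A rough sketch,
| assuming the official anthropic Python SDK and an
| ANTHROPIC_API_KEY in the environment (the model id is a
| placeholder):
|
|     import anthropic
|
|     client = anthropic.Anthropic()  # uses ANTHROPIC_API_KEY
|     history = []
|
|     while True:
|         user = input("you> ")
|         if not user:
|             break
|         history.append({"role": "user", "content": user})
|         reply = client.messages.create(
|             model="claude-3-5-sonnet-20240620",  # placeholder
|             max_tokens=1024,
|             messages=history,
|         )
|         text = reply.content[0].text
|         history.append({"role": "assistant", "content": text})
|         print(text)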
| jakubtomanik wrote:
| LibreChat
| jakubtomanik wrote:
| https://www.librechat.ai/
| OutOfHere wrote:
| Anthropic is a joke of a company. They have all these heavyweight
| hires, but they make logins so difficult that no chat user in
| their right mind would want to use Anthropic. It's as if
| Anthropic doesn't actually want people using their service.
|
| They routinely keep logging me out, and they always make me wait
| for an email confirmation code just to log in, and it's
| sickening.
|
| They also promise API credits but then don't actually give any.
| rsstack wrote:
| The chat on the website is not their main product... They're
| selling access to their models to enterprises. As sickened as
| you are by having to log in to the chat, that's not an
| indication of their success at training and marketing high-
| quality models (the only real competition to OpenAI at this
| point).
| OutOfHere wrote:
| > The chat on the website is not their main product
|
| Guess what: enterprises are made of people. People like to
| try things out. If people are not happy with something for
| their personal use, they most definitely are never going to
| recommend it to their employer. This is why OpenAI wins. It
| is in fact one of the factors that set apart a hyper-
| successful product from a wannabe.
| llamaimperative wrote:
| Literally every single person I know who's building
| anything in AI (dozens of early stage founders) prefers
| Claude over GPT at this point.
|
| The AI emperor will not be the one who has the most
| consumers logging into product.com to use the chatbot.
|
| Compound this with OpenAI's continuous shedding of, as far
| as I can tell, every credible researcher... I find your
| position quite hard to believe, even accounting for the
| hysterical tone.
| OutOfHere wrote:
| > Literally every single person I know who's building
| anything in AI (dozens of early stage founders) prefers
| Claude over GPT
|
| I don't believe this at all. I am not here to argue that
| Claude is a worse model, only that Anthropic is a worse
| company.
|
| > The AI emperor will not be the one who has the most
| consumers logging into product.com
|
| Your point only goes to show how much Anthropic hates its
| end users.
|
| > OpenAI's continuous shedding of, as far as I can tell,
| every credible researcher
|
| OpenAI has zero trouble hiring great talent. As I see it,
| they lost a lot of dead weight that had no interest in
| bringing AI to the masses, but had an agenda of their own
| instead.
| llamaimperative wrote:
| "Your point only goes..."
|
| Huh? How so? Sorry not even clear what your complaint
| is... is it the Firefox (3% market share) login bug? The
| Claude chat experience has been superior for a while now,
| and Projects and Artifacts make it 100x so.
|
| Good at hiring and bad at retaining is much worse than
| the reverse, especially for long-lived R&D projects.
| OutOfHere wrote:
| > what your complaint is
|
| It helps to read. It's noted in the original comment. It
| has nothing whatsoever to do with Firefox, as it
| manifests only on the Anthropic website.
| MOARDONGZPLZ wrote:
| Shredding you say?
| piva00 wrote:
| I'm a chat user, hopefully with a sane mind, and their login
| process might be annoying but doesn't stop me from using it. I
| get logged out once every 2-4 weeks maybe?
| thelittleone wrote:
| My experience has been the opposite. I'm using the API. I hit
| usage tier limits, so I sent them a request to upgrade my tier.
| I heard back within an hour and was upgraded to the highest
| tier. I had another request as well, which was answered the
| same day.
| OutOfHere wrote:
| That's good but my complaint is not about the API at all. It
| is more foundational in that most people would want to use
| the API (for applications) only if the web chat works well
| for them, which it doesn't.
| modeless wrote:
| Kingma is most notable for writing one of the most cited papers
| in AI. Actually one of the most cited scientific papers ever
| published, right up there with the transformers paper if not
| higher. "Adam: A Method for Stochastic Optimization"
| https://arxiv.org/abs/1412.6980
|
| I remain astonished that Adam continues to be the most widely
| used optimizer in AI 10 years later. So many contenders have
| failed to replace it.
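|
| For anyone who hasn't read the paper, the whole update rule
| fits in a few lines. A rough numpy sketch of a single Adam
| step, using the default hyperparameters from the paper:
|
|     import numpy as np
|
|     def adam_step(theta, grad, m, v, t,
|                   lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
|         # Exponential moving averages of the gradient and its square
|         m = beta1 * m + (1 - beta1) * grad
|         v = beta2 * v + (1 - beta2) * grad ** 2
|         # Bias correction for the zero-initialized moment estimates
|         m_hat = m / (1 - beta1 ** t)
|         v_hat = v / (1 - beta2 ** t)
|         # Per-parameter step scaled by the second-moment estimate
|         theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
|         return theta, m, v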
| supafastcoder wrote:
| I mean, there are only so many ways to optimize a black box
| LarsDu88 wrote:
| The number of papers that use it exceeds the number of papers
| that cite it, probably by 100x!
| OutOfHere wrote:
| (delete)
| esafak wrote:
| That's an architecture, not an optimizer. You can probably
| use ADAM with KANs. I think you latched on to the
| transformers in the sentence, but Kingma did not invent
| those.
| p1esk wrote:
| In my opinion, he is more notable for inventing the variational
| autoencoder.
| brcmthrowaway wrote:
| I hate the penchant Claude has for abstracting everything into
| one-line functions.
___________________________________________________________________
(page generated 2024-10-01 23:01 UTC)