[HN Gopher] Support for Claude Sonnet 3.5, OpenAI O1 and Gemini ...
___________________________________________________________________
Support for Claude Sonnet 3.5, OpenAI O1 and Gemini 1.5 Pro
Author : benocodes
Score : 51 points
Date : 2024-10-31 17:41 UTC (5 hours ago)
(HTM) web link (www.qodo.ai)
(TXT) w3m dump (www.qodo.ai)
| senko wrote:
| Apparently that's Codium, who have recently renamed themselves to
| Qodo: https://www.qodo.ai/blog/introducing-qodo-a-new-name-the-
| sam... (TIL)
| SquareWheel wrote:
| It seems like their new name is literally "Qodo (formerly
| Codium)", parenthetical included. At first I thought they were
| just including it in the blog post for clarity, but they
| literally write it out that way a dozen times. It's also
| included as part of their new logo and in the site title.
|
| I've never seen anything like that before. It feels like a
| search+replace operation gone awry.
| superfrank wrote:
| SEO reasons maybe?
| emmanueloga_ wrote:
| Why is "Qodo" "better" than "Codium"?
| vunderba wrote:
| Not sure; maybe they felt like SEO results would rank VS
| Codium over it?
| swyx wrote:
| No, there's another Series B startup, Codeium, that they get
| confused with all the time. We talked to both:
|
| - https://latent.space/p/codium-agents
|
| - https://latent.space/p/varun-mohan
| tourmalinetaco wrote:
| Maybe it's a joke? Like "X, formerly Twitter" or "Ye,
| formerly Kanye", or even "the artist formerly known as
| Prince".
| gronky_ wrote:
| I tried generating the same test with all 5 models in Qodo Gen.
|
| o1 is very slow - like, you can go get a coffee while it
| generates a single test (if it doesn't time out in the
| middle).
|
| o1-mini, though, worked really well. It generated a good test
| and wasn't noticeably slower than the other models.
|
| My feeling is that o1-mini will end up being more useful for
| coding than o1, except maybe for some specific instances where
| you need very deep analysis.
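|
| For context, the kind of test I mean is an ordinary unit test
| over a single function. Assuming a Python project, something
| along these lines (a hypothetical sketch for illustration, not
| actual Qodo Gen output):
|
|     import pytest
|
|     def divide(a, b):
|         # toy function under test; hypothetical example
|         return a / b
|
|     def test_divide_returns_quotient():
|         assert divide(10, 2) == 5
|
|     def test_divide_by_zero_raises():
|         with pytest.raises(ZeroDivisionError):
|             divide(1, 0)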
| superfrank wrote:
| How well did it work for generating tests? I was looking for
| an AI test-generation tool yesterday and came across this one,
| but it wasn't clear how good it is.
|
| (Before I get a bunch of comments about not letting AI write
| tests: this is for a hobby side project that I have a few hours
| a week to work on. I'm looking into AI test generation because
| the alternative is no tests.)
| gaze wrote:
| I guess this is as good a place as any to ask -- what's
| everyone's favorite AI code assist tool?
| mattnewton wrote:
| Cursor. It's the first one I've tried that seems like more
| than a neat demo.
|
| - But I'm weird and usually disable tab completion: having
| generations pop up while I'm typing slows me down, since I have
| to read them and think about them, and it feels like it's
| giving me ADD. So I've always kinda been a Copilot hater. Lots
| of people find that mode more productive, and a fancy version
| of it is on by default in Cursor. However, Cursor implemented a
| bunch of different interfaces well, not just the Copilot one,
| and I find the in-editor chat window for churning out
| boilerplate or refactors is a huge productivity win personally.
| There are a lot of one-off refactors that are annoying enough
| that I wouldn't want to dedicate an afternoon to them, but now
| they take me just a few minutes of reviewing AI changes.
| written-beyond wrote:
| Exactly why I never went with Copilot; I got a ChatGPT
| subscription instead and prompt for the stuff I need.
|
| I do sort of regret it too; sometimes you just want to give
| more context, and it's a hassle in the moment to figure out
| what you need to paste so the model has adequate context to
| generate something valid. Also, Claude is magnitudes better
| than anything from ChatGPT. Both are terrible at implementing
| abstract, completely unique code blocks, but ChatGPT is
| significantly more "Markov-y" when it comes to generating any
| code. When Claude gets things wrong, it feels like a more human
| mistake.
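|
| What helps a bit is scripting the paste: a tiny helper that
| concatenates the relevant files into one block to drop into the
| chat. A minimal sketch, assuming a Python project (the file
| paths are made up for illustration):
|
|     from pathlib import Path
|
|     # Files the model probably needs to see; adjust per prompt.
|     files = ["src/models.py", "src/handlers.py"]
|
|     chunks = []
|     for name in files:
|         text = Path(name).read_text()
|         chunks.append(f"### {name}\n```python\n{text}\n```")
|
|     # Print everything; pipe it into the clipboard and paste it
|     # into the chat along with the actual question.
|     print("\n\n".join(chunks))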
|
| Anyway, with 50% of HN obsessing over Cursor, is it worth it?
| I couldn't get it to open projects I have in WSL2, and I kind
| of gave up at that point. I've gotten far with Claude's free
| tier, and $20 just for Cursor seems steep for something that's
| not as stable.
|
| Have you assessed Zed's autocompletion, or read about others'
| experience with it? Zed seems like it has a more stable
| foundation than any of these VSCode forks.
| infecto wrote:
| Cursor has been my favorite so far, but I also have never
| tried Codium. Copilot was the previous winner, but honestly
| it's just tab completion. I tried JetBrains' offering, but it
| felt janky and slow. Cursor's tab completion feels nicer: it's
| super fast and will suggest updates based on code changes. I
| like being able to quickly get it to write some code updates,
| and it returns them as green/red diff lines like a GitHub PR.
| The flow is really nice for me, and I am looking forward to the
| future.
| Alifatisk wrote:
| Cursor.
| jonathaneunice wrote:
| Cursor.
|
| All in on tab completion and its other UI/UX advances
| (generate, chat, composer, ...)
| nicce wrote:
| Zed's integrated tools have been more than enough for me.
| aberoham wrote:
| aider-chat
| victorbjorklund wrote:
| Aider AI.
| emmanueloga_ wrote:
| Recent: https://news.ycombinator.com/item?id=41819039
| ghawkescs wrote:
| Same question, but for VSCode plugins. Besides Copilot, what
| is everyone using? Claude support is a huge plus.
| Y_Y wrote:
| Emacs.
| sunaookami wrote:
| Cody
| edm0nd wrote:
| I've been loving Claude Sonnet for Python.
| haliliceylan wrote:
| How is that free???
| rtsil wrote:
| Presumably "free" refers to users on their free plan, which
| does not include code generation/autocomplete except for
| tests.
| decide1000 wrote:
| I use Tabnine. It supports many models, including Claude, and
| I find the output better than Copilot's. My IDEs are from
| JetBrains, and I work mainly in Python and PHP.
___________________________________________________________________
(page generated 2024-10-31 23:00 UTC)